Test Report: Docker_Linux_crio 21808

530458d3ecd77092debe1aca48846101c1a78c03:2025-11-02:42171

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.26
35 TestAddons/parallel/Registry 15.17
36 TestAddons/parallel/RegistryCreds 0.49
37 TestAddons/parallel/Ingress 145.66
38 TestAddons/parallel/InspektorGadget 5.24
39 TestAddons/parallel/MetricsServer 6.32
41 TestAddons/parallel/CSI 48.68
42 TestAddons/parallel/Headlamp 2.51
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 8.12
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 5.26
47 TestAddons/parallel/AmdGpuDevicePlugin 6.27
97 TestFunctional/parallel/ServiceCmdConnect 602.88
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.02
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.18
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.01
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
137 TestFunctional/parallel/ServiceCmd/DeployApp 600.57
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
153 TestFunctional/parallel/ServiceCmd/Format 0.54
154 TestFunctional/parallel/ServiceCmd/URL 0.54
191 TestJSONOutput/pause/Command 2.06
197 TestJSONOutput/unpause/Command 1.59
264 TestPause/serial/Pause 6.15
338 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.73
352 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.33
358 TestStartStop/group/old-k8s-version/serial/Pause 6.46
360 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.26
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.43
371 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.29
380 TestStartStop/group/newest-cni/serial/Pause 6.79
384 TestStartStop/group/no-preload/serial/Pause 5.4
388 TestStartStop/group/embed-certs/serial/Pause 6.16
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.83
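The three failures expanded below (Volcano, Registry, RegistryCreds) all fail the same way: "minikube addons disable" exits 11 with MK_ADDON_DISABLE_PAUSED because its paused-state check runs "sudo runc list -f json" on the node and gets "open /run/runc: no such file or directory" back from this crio runtime. A minimal sketch for reproducing the two steps of that check by hand, assuming the addons-341255 node container from this run is still up (docker exec stands in here for the SSH tunnel minikube actually uses):

	# Step 1: list kube-system containers via crictl -- this step succeeds in the logs below.
	docker exec addons-341255 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# Step 2: list containers via runc -- the step that fails in this run with:
	#   level=error msg="open /run/runc: no such file or directory"
	docker exec addons-341255 sudo runc list -f json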
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable volcano --alsologtostderr -v=1: exit status 11 (256.09797ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:49:54.128184   22745 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:49:54.128511   22745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:49:54.128521   22745 out.go:374] Setting ErrFile to fd 2...
	I1102 12:49:54.128525   22745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:49:54.128724   22745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:49:54.128957   22745 mustload.go:66] Loading cluster: addons-341255
	I1102 12:49:54.129286   22745 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:49:54.129300   22745 addons.go:607] checking whether the cluster is paused
	I1102 12:49:54.129385   22745 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:49:54.129401   22745 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:49:54.129840   22745 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:49:54.150975   22745 ssh_runner.go:195] Run: systemctl --version
	I1102 12:49:54.151033   22745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:49:54.170360   22745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:49:54.270302   22745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:49:54.270384   22745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:49:54.299095   22745 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:49:54.299123   22745 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:49:54.299126   22745 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:49:54.299129   22745 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:49:54.299132   22745 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:49:54.299135   22745 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:49:54.299137   22745 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:49:54.299139   22745 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:49:54.299149   22745 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:49:54.299157   22745 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:49:54.299163   22745 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:49:54.299165   22745 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:49:54.299168   22745 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:49:54.299170   22745 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:49:54.299172   22745 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:49:54.299184   22745 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:49:54.299191   22745 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:49:54.299196   22745 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:49:54.299198   22745 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:49:54.299201   22745 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:49:54.299203   22745 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:49:54.299205   22745 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:49:54.299208   22745 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:49:54.299210   22745 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:49:54.299215   22745 cri.go:89] found id: ""
	I1102 12:49:54.299261   22745 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:49:54.313832   22745 out.go:203] 
	W1102 12:49:54.315122   22745 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:49:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:49:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:49:54.315145   22745 out.go:285] * 
	* 
	W1102 12:49:54.318096   22745 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:49:54.319602   22745 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)

TestAddons/parallel/Registry (15.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.973344ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-w59vr" [2ed80c97-9f39-46ad-8c55-323cb5ec9834] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003049844s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2rjx9" [95e7e89f-42a8-4527-9c6e-2acb1b50b3fa] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004039724s
addons_test.go:392: (dbg) Run:  kubectl --context addons-341255 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-341255 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-341255 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.693676604s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 ip
2025/11/02 12:50:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable registry --alsologtostderr -v=1: exit status 11 (249.787915ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:50:18.147221   25639 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:50:18.147521   25639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:18.147531   25639 out.go:374] Setting ErrFile to fd 2...
	I1102 12:50:18.147535   25639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:18.147749   25639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:50:18.148013   25639 mustload.go:66] Loading cluster: addons-341255
	I1102 12:50:18.148365   25639 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:18.148381   25639 addons.go:607] checking whether the cluster is paused
	I1102 12:50:18.148460   25639 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:18.148475   25639 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:50:18.148884   25639 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:50:18.166743   25639 ssh_runner.go:195] Run: systemctl --version
	I1102 12:50:18.166794   25639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:50:18.184413   25639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:50:18.284491   25639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:50:18.284601   25639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:50:18.314279   25639 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:50:18.314312   25639 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:50:18.314316   25639 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:50:18.314319   25639 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:50:18.314322   25639 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:50:18.314326   25639 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:50:18.314329   25639 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:50:18.314332   25639 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:50:18.314334   25639 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:50:18.314348   25639 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:50:18.314352   25639 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:50:18.314356   25639 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:50:18.314359   25639 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:50:18.314361   25639 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:50:18.314364   25639 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:50:18.314370   25639 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:50:18.314377   25639 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:50:18.314383   25639 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:50:18.314387   25639 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:50:18.314391   25639 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:50:18.314395   25639 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:50:18.314398   25639 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:50:18.314402   25639 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:50:18.314405   25639 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:50:18.314409   25639 cri.go:89] found id: ""
	I1102 12:50:18.314468   25639 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:50:18.333067   25639 out.go:203] 
	W1102 12:50:18.334302   25639 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:50:18.334322   25639 out.go:285] * 
	* 
	W1102 12:50:18.337410   25639 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:50:18.339131   25639 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.17s)

TestAddons/parallel/RegistryCreds (0.49s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.026127ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-341255
addons_test.go:332: (dbg) Run:  kubectl --context addons-341255 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (283.3074ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:50:09.767332   24459 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:50:09.767719   24459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:09.767731   24459 out.go:374] Setting ErrFile to fd 2...
	I1102 12:50:09.767738   24459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:09.768015   24459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:50:09.768326   24459 mustload.go:66] Loading cluster: addons-341255
	I1102 12:50:09.768760   24459 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:09.768780   24459 addons.go:607] checking whether the cluster is paused
	I1102 12:50:09.768918   24459 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:09.768942   24459 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:50:09.769366   24459 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:50:09.792256   24459 ssh_runner.go:195] Run: systemctl --version
	I1102 12:50:09.792326   24459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:50:09.814517   24459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:50:09.919077   24459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:50:09.919160   24459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:50:09.948978   24459 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:50:09.948998   24459 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:50:09.949002   24459 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:50:09.949006   24459 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:50:09.949008   24459 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:50:09.949020   24459 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:50:09.949023   24459 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:50:09.949025   24459 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:50:09.949028   24459 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:50:09.949033   24459 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:50:09.949036   24459 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:50:09.949038   24459 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:50:09.949041   24459 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:50:09.949043   24459 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:50:09.949046   24459 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:50:09.949053   24459 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:50:09.949056   24459 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:50:09.949059   24459 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:50:09.949061   24459 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:50:09.949064   24459 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:50:09.949066   24459 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:50:09.949069   24459 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:50:09.949071   24459 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:50:09.949073   24459 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:50:09.949075   24459 cri.go:89] found id: ""
	I1102 12:50:09.949109   24459 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:50:09.963929   24459 out.go:203] 
	W1102 12:50:09.965236   24459 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:50:09.965261   24459 out.go:285] * 
	* 
	W1102 12:50:09.969180   24459 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:50:09.970425   24459 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.49s)

TestAddons/parallel/Ingress (145.66s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-341255 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-341255 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-341255 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [807a22b6-521a-4c73-b442-1d755e1dd743] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [807a22b6-521a-4c73-b442-1d755e1dd743] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.00435104s
I1102 12:50:16.855900   12914 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.201699854s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
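The curl above ran for 2m14s before the ssh session returned status 28, which is curl's timeout exit code. A bounded, verbose re-run of the same probe can show whether the stall happens at TCP connect or while waiting for the ingress controller's response; the -v and --max-time flags are diagnostic additions, not part of the original test command:

	# Same request the test makes, capped at 10s and with verbose output.
	out/minikube-linux-amd64 -p addons-341255 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"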
addons_test.go:288: (dbg) Run:  kubectl --context addons-341255 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-341255
helpers_test.go:243: (dbg) docker inspect addons-341255:

-- stdout --
	[
	    {
	        "Id": "29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec",
	        "Created": "2025-11-02T12:47:34.624656749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T12:47:34.658243617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec/hostname",
	        "HostsPath": "/var/lib/docker/containers/29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec/hosts",
	        "LogPath": "/var/lib/docker/containers/29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec/29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec-json.log",
	        "Name": "/addons-341255",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-341255:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-341255",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec",
	                "LowerDir": "/var/lib/docker/overlay2/0a7d7d6377799f36cb673230eaf0e07f2312d8e576987459c97616d49a041600-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a7d7d6377799f36cb673230eaf0e07f2312d8e576987459c97616d49a041600/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a7d7d6377799f36cb673230eaf0e07f2312d8e576987459c97616d49a041600/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a7d7d6377799f36cb673230eaf0e07f2312d8e576987459c97616d49a041600/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-341255",
	                "Source": "/var/lib/docker/volumes/addons-341255/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-341255",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-341255",
	                "name.minikube.sigs.k8s.io": "addons-341255",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e4d1397366a860695a2410391510b199369552311154ae2bc32a86e1e8a53e10",
	            "SandboxKey": "/var/run/docker/netns/e4d1397366a8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-341255": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:6a:18:ea:88:53",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e8d6efd86cb40045fd0347c60946cae75d49fbfd2c9b2e46da512cdb65f1946b",
	                    "EndpointID": "883d6fe78cfc3269fef654e0fe55892390369f64cccd73e3cae3bc2c05a2fc25",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-341255",
	                        "29b8f38f8195"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
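The NetworkSettings.Ports block above is the data minikube's cli_runner reads to locate its SSH tunnel (22/tcp is published on 127.0.0.1:32768 in this run). The same lookup from a shell, as a shell-quoted version of the template that appears verbatim in the stderr logs earlier in this report:

	# Prints the host port mapped to the node's SSH port (22/tcp); 32768 for the state captured above.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-341255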
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-341255 -n addons-341255
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-341255 logs -n 25: (1.119707489s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-594380 --alsologtostderr --binary-mirror http://127.0.0.1:44217 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-594380 │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │                     │
	│ delete  │ -p binary-mirror-594380                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-594380 │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:47 UTC │
	│ addons  │ enable dashboard -p addons-341255                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │                     │
	│ addons  │ disable dashboard -p addons-341255                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │                     │
	│ start   │ -p addons-341255 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:49 UTC │
	│ addons  │ addons-341255 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:49 UTC │                     │
	│ addons  │ addons-341255 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ addons  │ enable headlamp -p addons-341255 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ addons  │ addons-341255 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ addons  │ addons-341255 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ addons  │ addons-341255 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-341255                                                                                                                                                                                                                                                                                                                                                                                           │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │ 02 Nov 25 12:50 UTC │
	│ addons  │ addons-341255 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ addons  │ addons-341255 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ addons  │ addons-341255 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ ssh     │ addons-341255 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ ip      │ addons-341255 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │ 02 Nov 25 12:50 UTC │
	│ addons  │ addons-341255 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ ssh     │ addons-341255 ssh cat /opt/local-path-provisioner/pvc-a6c8ab11-af96-4d9e-befc-978d62d9294e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │ 02 Nov 25 12:50 UTC │
	│ addons  │ addons-341255 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ addons  │ addons-341255 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ addons  │ addons-341255 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ addons  │ addons-341255 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:51 UTC │                     │
	│ addons  │ addons-341255 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:51 UTC │                     │
	│ ip      │ addons-341255 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-341255        │ jenkins │ v1.37.0 │ 02 Nov 25 12:52 UTC │ 02 Nov 25 12:52 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 12:47:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 12:47:10.250973   14235 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:47:10.251260   14235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:47:10.251270   14235 out.go:374] Setting ErrFile to fd 2...
	I1102 12:47:10.251274   14235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:47:10.251443   14235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:47:10.251935   14235 out.go:368] Setting JSON to false
	I1102 12:47:10.252856   14235 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1782,"bootTime":1762085848,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 12:47:10.252939   14235 start.go:143] virtualization: kvm guest
	I1102 12:47:10.254846   14235 out.go:179] * [addons-341255] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 12:47:10.256214   14235 notify.go:221] Checking for updates...
	I1102 12:47:10.256241   14235 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 12:47:10.257444   14235 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 12:47:10.258968   14235 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 12:47:10.260422   14235 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 12:47:10.261701   14235 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 12:47:10.263102   14235 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 12:47:10.264485   14235 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 12:47:10.287640   14235 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 12:47:10.287744   14235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:47:10.342495   14235 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-02 12:47:10.333417346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:47:10.342620   14235 docker.go:319] overlay module found
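
For readers reproducing the two `docker system info --format "{{json .}}"` probes above: a minimal, hedged sketch (not minikube's actual info.go) of shelling out to Docker and decoding just the fields this report surfaces, assuming only the Go standard library and a local Docker daemon.

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// dockerInfo captures only a few of the fields quoted in the log above;
// the real JSON returned by `docker system info` is much larger.
type dockerInfo struct {
    ServerVersion string `json:"ServerVersion"`
    CgroupDriver  string `json:"CgroupDriver"`
    NCPU          int    `json:"NCPU"`
    MemTotal      int64  `json:"MemTotal"`
}

func main() {
    out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    if err != nil {
        panic(err)
    }
    var info dockerInfo
    if err := json.Unmarshal(out, &info); err != nil {
        panic(err)
    }
    fmt.Printf("docker %s, cgroup driver %s, %d CPUs, %d bytes RAM\n",
        info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal)
}

On the host in this run, such a probe would report docker 28.5.1 with the systemd cgroup driver, matching the detection logged later at 12:47:37.
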
	I1102 12:47:10.344275   14235 out.go:179] * Using the docker driver based on user configuration
	I1102 12:47:10.345411   14235 start.go:309] selected driver: docker
	I1102 12:47:10.345430   14235 start.go:930] validating driver "docker" against <nil>
	I1102 12:47:10.345439   14235 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 12:47:10.346016   14235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:47:10.397240   14235 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-02 12:47:10.388322177 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:47:10.397391   14235 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 12:47:10.397649   14235 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 12:47:10.399420   14235 out.go:179] * Using Docker driver with root privileges
	I1102 12:47:10.400658   14235 cni.go:84] Creating CNI manager for ""
	I1102 12:47:10.400718   14235 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 12:47:10.400728   14235 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 12:47:10.400787   14235 start.go:353] cluster config:
	{Name:addons-341255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-341255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 12:47:10.402196   14235 out.go:179] * Starting "addons-341255" primary control-plane node in "addons-341255" cluster
	I1102 12:47:10.403533   14235 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 12:47:10.404890   14235 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 12:47:10.406200   14235 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 12:47:10.406237   14235 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 12:47:10.406243   14235 cache.go:59] Caching tarball of preloaded images
	I1102 12:47:10.406310   14235 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 12:47:10.406310   14235 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 12:47:10.406321   14235 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 12:47:10.406675   14235 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/config.json ...
	I1102 12:47:10.406700   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/config.json: {Name:mk8cc4f6201cd536994d4ff0636752c655b01dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
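
The WriteFile-acquiring log line above corresponds to a lock-guarded save of the profile's config.json. As a simplified stand-in for that pattern (this is not minikube's lock.go), the following sketch writes to a temp file and renames it into place, so a concurrent reader never sees a half-written config; the example payload is hypothetical.

package main

import (
    "encoding/json"
    "os"
    "path/filepath"
)

// saveConfig marshals cfg and atomically replaces <profileDir>/config.json.
// Rename on the same filesystem is atomic, which substitutes here for the
// Delay/Timeout file lock shown in the log.
func saveConfig(profileDir string, cfg any) error {
    data, err := json.MarshalIndent(cfg, "", "  ")
    if err != nil {
        return err
    }
    tmp, err := os.CreateTemp(profileDir, "config-*.json")
    if err != nil {
        return err
    }
    if _, err := tmp.Write(data); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Close(); err != nil {
        return err
    }
    return os.Rename(tmp.Name(), filepath.Join(profileDir, "config.json"))
}

func main() {
    // Illustrative payload only; the real config is the cluster struct above.
    cfg := map[string]string{"Name": "addons-341255", "Driver": "docker"}
    if err := saveConfig(".", cfg); err != nil {
        panic(err)
    }
}
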
	I1102 12:47:10.423274   14235 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1102 12:47:10.423394   14235 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1102 12:47:10.423415   14235 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1102 12:47:10.423423   14235 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1102 12:47:10.423430   14235 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1102 12:47:10.423436   14235 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1102 12:47:22.860304   14235 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1102 12:47:22.860358   14235 cache.go:233] Successfully downloaded all kic artifacts
	I1102 12:47:22.860397   14235 start.go:360] acquireMachinesLock for addons-341255: {Name:mkf563c157e84d426caa00e0d150636e69ae60c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 12:47:22.860487   14235 start.go:364] duration metric: took 71.877µs to acquireMachinesLock for "addons-341255"
	I1102 12:47:22.860511   14235 start.go:93] Provisioning new machine with config: &{Name:addons-341255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-341255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 12:47:22.860608   14235 start.go:125] createHost starting for "" (driver="docker")
	I1102 12:47:22.862286   14235 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1102 12:47:22.862496   14235 start.go:159] libmachine.API.Create for "addons-341255" (driver="docker")
	I1102 12:47:22.862525   14235 client.go:173] LocalClient.Create starting
	I1102 12:47:22.862639   14235 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem
	I1102 12:47:23.190130   14235 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem
	I1102 12:47:23.235220   14235 cli_runner.go:164] Run: docker network inspect addons-341255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 12:47:23.252356   14235 cli_runner.go:211] docker network inspect addons-341255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 12:47:23.252418   14235 network_create.go:284] running [docker network inspect addons-341255] to gather additional debugging logs...
	I1102 12:47:23.252436   14235 cli_runner.go:164] Run: docker network inspect addons-341255
	W1102 12:47:23.268251   14235 cli_runner.go:211] docker network inspect addons-341255 returned with exit code 1
	I1102 12:47:23.268278   14235 network_create.go:287] error running [docker network inspect addons-341255]: docker network inspect addons-341255: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-341255 not found
	I1102 12:47:23.268300   14235 network_create.go:289] output of [docker network inspect addons-341255]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-341255 not found
	
	** /stderr **
	I1102 12:47:23.268463   14235 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 12:47:23.284913   14235 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ceacf0}
	I1102 12:47:23.284956   14235 network_create.go:124] attempt to create docker network addons-341255 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1102 12:47:23.285003   14235 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-341255 addons-341255
	I1102 12:47:23.340652   14235 network_create.go:108] docker network addons-341255 192.168.49.0/24 created
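
The network.go line above derives the gateway, client range and broadcast addresses from the chosen free /24 before the `docker network create` call. A self-contained sketch of that arithmetic, assuming a /24 as in this run (this is not minikube's scanner, which also checks which subnets are already taken):

package main

import (
    "fmt"
    "net"
)

func main() {
    // Subnet taken from the log above.
    _, ipnet, err := net.ParseCIDR("192.168.49.0/24")
    if err != nil {
        panic(err)
    }
    base := ipnet.IP.To4()
    addr := func(last byte) net.IP { return net.IPv4(base[0], base[1], base[2], last) }
    fmt.Println("gateway:  ", addr(1))   // 192.168.49.1
    fmt.Println("clientMin:", addr(2))   // 192.168.49.2, the node IP
    fmt.Println("clientMax:", addr(254)) // 192.168.49.254
    fmt.Println("broadcast:", addr(255)) // 192.168.49.255
}

The clientMin address is why the node later gets the static IP 192.168.49.2.
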
	I1102 12:47:23.340681   14235 kic.go:121] calculated static IP "192.168.49.2" for the "addons-341255" container
	I1102 12:47:23.340732   14235 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 12:47:23.356328   14235 cli_runner.go:164] Run: docker volume create addons-341255 --label name.minikube.sigs.k8s.io=addons-341255 --label created_by.minikube.sigs.k8s.io=true
	I1102 12:47:23.373686   14235 oci.go:103] Successfully created a docker volume addons-341255
	I1102 12:47:23.373752   14235 cli_runner.go:164] Run: docker run --rm --name addons-341255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-341255 --entrypoint /usr/bin/test -v addons-341255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 12:47:30.226022   14235 cli_runner.go:217] Completed: docker run --rm --name addons-341255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-341255 --entrypoint /usr/bin/test -v addons-341255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.85222317s)
	I1102 12:47:30.226066   14235 oci.go:107] Successfully prepared a docker volume addons-341255
	I1102 12:47:30.226104   14235 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 12:47:30.226127   14235 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 12:47:30.226198   14235 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-341255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1102 12:47:34.550982   14235 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-341255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.324748588s)
	I1102 12:47:34.551010   14235 kic.go:203] duration metric: took 4.324880929s to extract preloaded images to volume ...
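
The extraction step above runs tar inside a throwaway container so the preload lands directly in the named volume, and the log times it as a "duration metric". A hedged Go sketch of that invocation; the paths and image reference are copied from the log, but the wrapper itself is illustrative, not minikube's cli_runner:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    start := time.Now()
    cmd := exec.Command("docker", "run", "--rm",
        "--entrypoint", "/usr/bin/tar",
        "-v", "/home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro",
        "-v", "addons-341255:/extractDir",
        "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
        "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    if out, err := cmd.CombinedOutput(); err != nil {
        panic(fmt.Sprintf("%v: %s", err, out))
    }
    fmt.Printf("extracted preload in %s\n", time.Since(start)) // ~4.3s in this run
}
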
	W1102 12:47:34.551110   14235 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1102 12:47:34.551142   14235 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1102 12:47:34.551176   14235 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 12:47:34.609448   14235 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-341255 --name addons-341255 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-341255 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-341255 --network addons-341255 --ip 192.168.49.2 --volume addons-341255:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 12:47:34.892001   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Running}}
	I1102 12:47:34.910209   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:34.927310   14235 cli_runner.go:164] Run: docker exec addons-341255 stat /var/lib/dpkg/alternatives/iptables
	I1102 12:47:34.977823   14235 oci.go:144] the created container "addons-341255" has a running status.
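
The two container-inspect calls above are a readiness check on the freshly created container. A minimal sketch of that kind of poll, assuming only the standard library and the container name from the log; the loop and timeout are illustrative choices, not minikube's exact logic:

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

func main() {
    deadline := time.Now().Add(30 * time.Second)
    for time.Now().Before(deadline) {
        // Same inspect format string as the log: {{.State.Running}}
        out, err := exec.Command("docker", "container", "inspect",
            "addons-341255", "--format", "{{.State.Running}}").Output()
        if err == nil && strings.TrimSpace(string(out)) == "true" {
            fmt.Println("container is running")
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for container")
}
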
	I1102 12:47:34.977856   14235 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa...
	I1102 12:47:35.288151   14235 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 12:47:35.314349   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:35.332530   14235 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 12:47:35.332553   14235 kic_runner.go:114] Args: [docker exec --privileged addons-341255 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 12:47:35.379067   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:35.396410   14235 machine.go:94] provisionDockerMachine start ...
	I1102 12:47:35.396504   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:35.412963   14235 main.go:143] libmachine: Using SSH client type: native
	I1102 12:47:35.413305   14235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1102 12:47:35.413329   14235 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 12:47:35.554407   14235 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-341255
	
	I1102 12:47:35.554431   14235 ubuntu.go:182] provisioning hostname "addons-341255"
	I1102 12:47:35.554491   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:35.572293   14235 main.go:143] libmachine: Using SSH client type: native
	I1102 12:47:35.572588   14235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1102 12:47:35.572609   14235 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-341255 && echo "addons-341255" | sudo tee /etc/hostname
	I1102 12:47:35.718544   14235 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-341255
	
	I1102 12:47:35.718655   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:35.736541   14235 main.go:143] libmachine: Using SSH client type: native
	I1102 12:47:35.736751   14235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1102 12:47:35.736780   14235 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-341255' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-341255/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-341255' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 12:47:35.873883   14235 main.go:143] libmachine: SSH cmd err, output: <nil>: 
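
Provisioning above happens over SSH to the forwarded port 127.0.0.1:32768 using the generated machine key. As a rough sketch of such a step with golang.org/x/crypto/ssh (an external module; this is not minikube's libmachine code, and InsecureIgnoreHostKey is a simplification for illustration):

package main

import (
    "fmt"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    // Key path, user and port taken from the log above.
    key, err := os.ReadFile("/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa")
    if err != nil {
        panic(err)
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        panic(err)
    }
    cfg := &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    }
    client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
    if err != nil {
        panic(err)
    }
    defer client.Close()
    sess, err := client.NewSession()
    if err != nil {
        panic(err)
    }
    defer sess.Close()
    out, err := sess.CombinedOutput("hostname")
    if err != nil {
        panic(err)
    }
    fmt.Printf("remote hostname: %s", out) // expect "addons-341255" after provisioning
}
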
	I1102 12:47:35.873908   14235 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 12:47:35.873932   14235 ubuntu.go:190] setting up certificates
	I1102 12:47:35.873947   14235 provision.go:84] configureAuth start
	I1102 12:47:35.873996   14235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-341255
	I1102 12:47:35.891130   14235 provision.go:143] copyHostCerts
	I1102 12:47:35.891207   14235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 12:47:35.891320   14235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 12:47:35.891384   14235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 12:47:35.891436   14235 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.addons-341255 san=[127.0.0.1 192.168.49.2 addons-341255 localhost minikube]
	I1102 12:47:36.382190   14235 provision.go:177] copyRemoteCerts
	I1102 12:47:36.382245   14235 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 12:47:36.382279   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:36.398892   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:36.497344   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 12:47:36.515470   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1102 12:47:36.531754   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 12:47:36.547973   14235 provision.go:87] duration metric: took 674.01494ms to configureAuth
	I1102 12:47:36.548000   14235 ubuntu.go:206] setting minikube options for container-runtime
	I1102 12:47:36.548143   14235 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:47:36.548241   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:36.564916   14235 main.go:143] libmachine: Using SSH client type: native
	I1102 12:47:36.565119   14235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1102 12:47:36.565136   14235 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 12:47:36.811352   14235 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 12:47:36.811379   14235 machine.go:97] duration metric: took 1.414948314s to provisionDockerMachine
	I1102 12:47:36.811392   14235 client.go:176] duration metric: took 13.948860905s to LocalClient.Create
	I1102 12:47:36.811414   14235 start.go:167] duration metric: took 13.948919577s to libmachine.API.Create "addons-341255"
	I1102 12:47:36.811421   14235 start.go:293] postStartSetup for "addons-341255" (driver="docker")
	I1102 12:47:36.811433   14235 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 12:47:36.811505   14235 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 12:47:36.811552   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:36.829245   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:36.930618   14235 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 12:47:36.934088   14235 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 12:47:36.934116   14235 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 12:47:36.934127   14235 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 12:47:36.934177   14235 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 12:47:36.934202   14235 start.go:296] duration metric: took 122.776223ms for postStartSetup
	I1102 12:47:36.934473   14235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-341255
	I1102 12:47:36.951179   14235 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/config.json ...
	I1102 12:47:36.951435   14235 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 12:47:36.951472   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:36.969135   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:37.064397   14235 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 12:47:37.068751   14235 start.go:128] duration metric: took 14.208129897s to createHost
	I1102 12:47:37.068776   14235 start.go:83] releasing machines lock for "addons-341255", held for 14.208276854s
	I1102 12:47:37.068845   14235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-341255
	I1102 12:47:37.086200   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 12:47:37.086248   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 12:47:37.086272   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 12:47:37.086295   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	W1102 12:47:37.086358   14235 start.go:789] pre-probe CA setup failed: create ca cert file asset for /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt: stat: stat /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt: no such file or directory
	I1102 12:47:37.086411   14235 ssh_runner.go:195] Run: cat /version.json
	I1102 12:47:37.086444   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:37.086508   14235 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 12:47:37.086582   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:37.105842   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:37.106245   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:37.257171   14235 ssh_runner.go:195] Run: systemctl --version
	I1102 12:47:37.263197   14235 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 12:47:37.294370   14235 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 12:47:37.298739   14235 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 12:47:37.298790   14235 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 12:47:37.323638   14235 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1102 12:47:37.323665   14235 start.go:496] detecting cgroup driver to use...
	I1102 12:47:37.323697   14235 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 12:47:37.323739   14235 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 12:47:37.338687   14235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 12:47:37.350279   14235 docker.go:218] disabling cri-docker service (if available) ...
	I1102 12:47:37.350336   14235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 12:47:37.366364   14235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 12:47:37.383198   14235 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 12:47:37.461758   14235 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 12:47:37.546986   14235 docker.go:234] disabling docker service ...
	I1102 12:47:37.547052   14235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 12:47:37.564406   14235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 12:47:37.576538   14235 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 12:47:37.659080   14235 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 12:47:37.739432   14235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 12:47:37.751738   14235 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 12:47:37.765160   14235 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 12:47:37.765210   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.774895   14235 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 12:47:37.774951   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.783540   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.792240   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.800981   14235 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 12:47:37.808871   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.817512   14235 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.830758   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
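
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf containing approximately the following key/value lines (the values are exactly those in the sed commands; surrounding TOML sections are omitted here because the sed expressions match the keys wherever they appear in the file):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The unprivileged-port sysctl is what later lets pods bind low ports without extra capabilities; the crio restart at 12:47:37.945 picks all of this up.
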
	I1102 12:47:37.839311   14235 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 12:47:37.846795   14235 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1102 12:47:37.846853   14235 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1102 12:47:37.858888   14235 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 12:47:37.867623   14235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 12:47:37.945257   14235 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 12:47:38.046677   14235 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 12:47:38.046754   14235 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 12:47:38.050653   14235 start.go:564] Will wait 60s for crictl version
	I1102 12:47:38.050712   14235 ssh_runner.go:195] Run: which crictl
	I1102 12:47:38.054226   14235 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 12:47:38.078417   14235 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 12:47:38.078540   14235 ssh_runner.go:195] Run: crio --version
	I1102 12:47:38.105053   14235 ssh_runner.go:195] Run: crio --version
	I1102 12:47:38.133361   14235 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 12:47:38.134434   14235 cli_runner.go:164] Run: docker network inspect addons-341255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 12:47:38.150664   14235 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1102 12:47:38.154548   14235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 12:47:38.164365   14235 kubeadm.go:884] updating cluster {Name:addons-341255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-341255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 12:47:38.164476   14235 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 12:47:38.164523   14235 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 12:47:38.193799   14235 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 12:47:38.193818   14235 crio.go:433] Images already preloaded, skipping extraction
	I1102 12:47:38.193858   14235 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 12:47:38.218494   14235 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 12:47:38.218516   14235 cache_images.go:86] Images are preloaded, skipping loading
	I1102 12:47:38.218524   14235 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1102 12:47:38.218634   14235 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-341255 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-341255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 12:47:38.218713   14235 ssh_runner.go:195] Run: crio config
	I1102 12:47:38.261174   14235 cni.go:84] Creating CNI manager for ""
	I1102 12:47:38.261193   14235 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 12:47:38.261205   14235 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 12:47:38.261226   14235 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-341255 NodeName:addons-341255 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 12:47:38.261352   14235 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-341255"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 12:47:38.261411   14235 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 12:47:38.269401   14235 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 12:47:38.269460   14235 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 12:47:38.277052   14235 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1102 12:47:38.289179   14235 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 12:47:38.304119   14235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1102 12:47:38.316420   14235 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1102 12:47:38.319840   14235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 12:47:38.329295   14235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 12:47:38.410137   14235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 12:47:38.435055   14235 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255 for IP: 192.168.49.2
	I1102 12:47:38.435077   14235 certs.go:195] generating shared ca certs ...
	I1102 12:47:38.435097   14235 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:38.435232   14235 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 12:47:38.769624   14235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt ...
	I1102 12:47:38.769661   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt: {Name:mkd7e9806d5c59b627e491ddb10238af7d2db0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:38.769833   14235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key ...
	I1102 12:47:38.769845   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key: {Name:mka520025ec19fdfd442874b83fab35cad4035b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:38.769917   14235 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 12:47:38.792815   14235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt ...
	I1102 12:47:38.792839   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt: {Name:mk025efd1cd72c556595edea83c8f3ac5302e128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:38.792977   14235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key ...
	I1102 12:47:38.792987   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key: {Name:mk41b6975b4f28b3c6e2653413a7758d0d49b443 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:38.793049   14235 certs.go:257] generating profile certs ...
	I1102 12:47:38.793103   14235 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.key
	I1102 12:47:38.793116   14235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt with IP's: []
	I1102 12:47:39.068199   14235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt ...
	I1102 12:47:39.068228   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: {Name:mk5652c64a00c1081056c7430e928129af43b585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.068383   14235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.key ...
	I1102 12:47:39.068396   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.key: {Name:mk805628c23d7a2a67e0aca1f815503019fbc6a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.068466   14235 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key.b098b56c
	I1102 12:47:39.068485   14235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt.b098b56c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1102 12:47:39.358212   14235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt.b098b56c ...
	I1102 12:47:39.358242   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt.b098b56c: {Name:mkec8b9bcafff68570ea12859728f4971cf333e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.358398   14235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key.b098b56c ...
	I1102 12:47:39.358411   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key.b098b56c: {Name:mk5e6983ccda7941e58d81c2f99ae7bf50363979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.358494   14235 certs.go:382] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt.b098b56c -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt
	I1102 12:47:39.358604   14235 certs.go:386] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key.b098b56c -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key
	I1102 12:47:39.358661   14235 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.key
	I1102 12:47:39.358680   14235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.crt with IP's: []
	I1102 12:47:39.475727   14235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.crt ...
	I1102 12:47:39.475764   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.crt: {Name:mk2bc60461dd6f04b91a711910f304d7c9377359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.475913   14235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.key ...
	I1102 12:47:39.475924   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.key: {Name:mk42c30c70b7f69d5f760c39a21e7262ba065d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.476075   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 12:47:39.476113   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 12:47:39.476132   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 12:47:39.476153   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 12:47:39.476700   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 12:47:39.493858   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 12:47:39.510591   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 12:47:39.527008   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 12:47:39.543052   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1102 12:47:39.558828   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 12:47:39.574925   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 12:47:39.591468   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 12:47:39.607514   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 12:47:39.625332   14235 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 12:47:39.637447   14235 ssh_runner.go:195] Run: openssl version
	I1102 12:47:39.643280   14235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 12:47:39.653602   14235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 12:47:39.657271   14235 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 12:47:39.657358   14235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 12:47:39.691223   14235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 12:47:39.699879   14235 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 12:47:39.703313   14235 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
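The non-zero stat above is expected: minikube treats a missing apiserver-kubelet-client.crt as evidence that the cluster was never initialized. A small local sketch of the same check, with the path copied from the log:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const cert = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	if _, err := os.Stat(cert); errors.Is(err, fs.ErrNotExist) {
		fmt.Println("cert missing, likely first start")
	} else if err != nil {
		fmt.Println("stat failed:", err)
	} else {
		fmt.Println("cert present, cluster was initialized before")
	}
}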
	I1102 12:47:39.703369   14235 kubeadm.go:401] StartCluster: {Name:addons-341255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-341255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 12:47:39.703431   14235 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:47:39.703477   14235 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:47:39.728822   14235 cri.go:89] found id: ""
	I1102 12:47:39.728896   14235 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 12:47:39.736780   14235 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 12:47:39.744195   14235 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 12:47:39.744248   14235 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 12:47:39.751322   14235 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 12:47:39.751339   14235 kubeadm.go:158] found existing configuration files:
	
	I1102 12:47:39.751378   14235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 12:47:39.758748   14235 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 12:47:39.758791   14235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 12:47:39.765507   14235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 12:47:39.772405   14235 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 12:47:39.772452   14235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 12:47:39.779276   14235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 12:47:39.786437   14235 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 12:47:39.786503   14235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 12:47:39.793237   14235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 12:47:39.800058   14235 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 12:47:39.800129   14235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
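The four grep/rm pairs above implement stale-config cleanup: each kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443, and otherwise removed so kubeadm can regenerate it. A sketch of the same loop done locally (minikube runs the equivalent grep and `rm -f` over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(conf) // ignore errors, mirroring `sudo rm -f`
			fmt.Println("removed (or absent):", conf)
			continue
		}
		fmt.Println("kept:", conf)
	}
}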
	I1102 12:47:39.807124   14235 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 12:47:39.840703   14235 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 12:47:39.840775   14235 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 12:47:39.860076   14235 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 12:47:39.860168   14235 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1102 12:47:39.860231   14235 kubeadm.go:319] OS: Linux
	I1102 12:47:39.860342   14235 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 12:47:39.860434   14235 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 12:47:39.860508   14235 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 12:47:39.860595   14235 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 12:47:39.860663   14235 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 12:47:39.860729   14235 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 12:47:39.860797   14235 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 12:47:39.860865   14235 kubeadm.go:319] CGROUPS_IO: enabled
	I1102 12:47:39.927451   14235 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 12:47:39.927638   14235 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 12:47:39.927775   14235 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 12:47:39.936166   14235 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 12:47:39.938837   14235 out.go:252]   - Generating certificates and keys ...
	I1102 12:47:39.938936   14235 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 12:47:39.939037   14235 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 12:47:40.551502   14235 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 12:47:40.616977   14235 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 12:47:40.823531   14235 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 12:47:40.874776   14235 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 12:47:40.910654   14235 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 12:47:40.910777   14235 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-341255 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1102 12:47:41.014439   14235 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 12:47:41.014636   14235 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-341255 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1102 12:47:41.123901   14235 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 12:47:41.619712   14235 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 12:47:41.774419   14235 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 12:47:41.774515   14235 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 12:47:42.036727   14235 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 12:47:42.129305   14235 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 12:47:42.437928   14235 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 12:47:42.482408   14235 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 12:47:42.547403   14235 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 12:47:42.547996   14235 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 12:47:42.551684   14235 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 12:47:42.553131   14235 out.go:252]   - Booting up control plane ...
	I1102 12:47:42.553216   14235 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 12:47:42.553284   14235 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 12:47:42.553951   14235 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 12:47:42.567382   14235 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 12:47:42.567493   14235 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 12:47:42.573931   14235 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 12:47:42.574186   14235 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 12:47:42.574231   14235 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 12:47:42.670477   14235 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 12:47:42.670666   14235 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 12:47:43.172077   14235 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.755082ms
	I1102 12:47:43.175115   14235 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 12:47:43.175254   14235 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1102 12:47:43.175396   14235 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 12:47:43.175511   14235 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 12:47:45.230820   14235 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.055613744s
	I1102 12:47:45.334789   14235 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.159623843s
	I1102 12:47:46.676913   14235 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.50171218s
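The three control-plane-check lines above poll well-known health endpoints until each returns 200. A minimal sketch of that pattern, with the URLs copied from the log; TLS verification is skipped here because the components serve self-signed certificates during bootstrap, and the 4-minute budget mirrors the "can take up to 4m0s" message:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, url := range []string{
		"https://192.168.49.2:8443/livez",    // kube-apiserver
		"https://127.0.0.1:10257/healthz",    // kube-controller-manager
		"https://127.0.0.1:10259/livez",      // kube-scheduler
	} {
		if err := waitHealthy(url, 4*time.Minute); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Println("healthy:", url)
	}
}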
	I1102 12:47:46.687183   14235 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 12:47:46.696476   14235 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 12:47:46.704332   14235 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 12:47:46.704683   14235 kubeadm.go:319] [mark-control-plane] Marking the node addons-341255 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 12:47:46.712318   14235 kubeadm.go:319] [bootstrap-token] Using token: mcu3nc.y8g41xelym1jkr4a
	I1102 12:47:46.713883   14235 out.go:252]   - Configuring RBAC rules ...
	I1102 12:47:46.713995   14235 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 12:47:46.717088   14235 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 12:47:46.722068   14235 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 12:47:46.724390   14235 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 12:47:46.726629   14235 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 12:47:46.729957   14235 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 12:47:47.082989   14235 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 12:47:47.499441   14235 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 12:47:48.082735   14235 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 12:47:48.083611   14235 kubeadm.go:319] 
	I1102 12:47:48.083677   14235 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 12:47:48.083685   14235 kubeadm.go:319] 
	I1102 12:47:48.083785   14235 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 12:47:48.083800   14235 kubeadm.go:319] 
	I1102 12:47:48.083835   14235 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 12:47:48.083945   14235 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 12:47:48.084030   14235 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 12:47:48.084040   14235 kubeadm.go:319] 
	I1102 12:47:48.084096   14235 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 12:47:48.084105   14235 kubeadm.go:319] 
	I1102 12:47:48.084169   14235 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 12:47:48.084179   14235 kubeadm.go:319] 
	I1102 12:47:48.084266   14235 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 12:47:48.084347   14235 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 12:47:48.084453   14235 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 12:47:48.084468   14235 kubeadm.go:319] 
	I1102 12:47:48.084600   14235 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 12:47:48.084715   14235 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 12:47:48.084725   14235 kubeadm.go:319] 
	I1102 12:47:48.084845   14235 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mcu3nc.y8g41xelym1jkr4a \
	I1102 12:47:48.084956   14235 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 \
	I1102 12:47:48.084991   14235 kubeadm.go:319] 	--control-plane 
	I1102 12:47:48.085013   14235 kubeadm.go:319] 
	I1102 12:47:48.085143   14235 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 12:47:48.085154   14235 kubeadm.go:319] 
	I1102 12:47:48.085259   14235 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mcu3nc.y8g41xelym1jkr4a \
	I1102 12:47:48.085383   14235 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 
	I1102 12:47:48.087049   14235 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1102 12:47:48.087192   14235 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1102 12:47:48.087217   14235 cni.go:84] Creating CNI manager for ""
	I1102 12:47:48.087227   14235 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 12:47:48.089685   14235 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 12:47:48.090949   14235 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 12:47:48.095164   14235 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 12:47:48.095180   14235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 12:47:48.107821   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1102 12:47:48.309800   14235 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 12:47:48.309881   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:48.309942   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-341255 minikube.k8s.io/updated_at=2025_11_02T12_47_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=addons-341255 minikube.k8s.io/primary=true
	I1102 12:47:48.319382   14235 ops.go:34] apiserver oom_adj: -16
	I1102 12:47:48.398111   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:48.898272   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:49.398735   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:49.898492   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:50.398651   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:50.898170   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:51.399062   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:51.898409   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:52.398479   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:52.898480   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:53.398249   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:53.463295   14235 kubeadm.go:1114] duration metric: took 5.153486386s to wait for elevateKubeSystemPrivileges
	I1102 12:47:53.463362   14235 kubeadm.go:403] duration metric: took 13.759997374s to StartCluster
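The repeated `kubectl get sa default` runs above (roughly every 500ms) are the elevateKubeSystemPrivileges wait: the loop ends once the default service account exists, signalling that the controller-manager has finished bootstrapping the namespace. A sketch of the same poll, with binary and kubeconfig paths copied from the log and the 6-minute ceiling assumed from the surrounding timeouts:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"get", "sa", "default",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once the service account has been created.
		if err := exec.Command("sudo", args...).Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}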
	I1102 12:47:53.463386   14235 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:53.463542   14235 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 12:47:53.464211   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:53.464998   14235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 12:47:53.465061   14235 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 12:47:53.465094   14235 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1102 12:47:53.465246   14235 addons.go:70] Setting yakd=true in profile "addons-341255"
	I1102 12:47:53.465257   14235 addons.go:70] Setting ingress-dns=true in profile "addons-341255"
	I1102 12:47:53.465275   14235 addons.go:239] Setting addon yakd=true in "addons-341255"
	I1102 12:47:53.465276   14235 addons.go:239] Setting addon ingress-dns=true in "addons-341255"
	I1102 12:47:53.465293   14235 addons.go:70] Setting registry-creds=true in profile "addons-341255"
	I1102 12:47:53.465317   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.465320   14235 addons.go:239] Setting addon registry-creds=true in "addons-341255"
	I1102 12:47:53.465335   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.465353   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.465386   14235 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:47:53.465422   14235 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-341255"
	I1102 12:47:53.465453   14235 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-341255"
	I1102 12:47:53.465476   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.465525   14235 addons.go:70] Setting gcp-auth=true in profile "addons-341255"
	I1102 12:47:53.465587   14235 mustload.go:66] Loading cluster: addons-341255
	I1102 12:47:53.465598   14235 addons.go:70] Setting metrics-server=true in profile "addons-341255"
	I1102 12:47:53.465650   14235 addons.go:239] Setting addon metrics-server=true in "addons-341255"
	I1102 12:47:53.465769   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.465804   14235 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:47:53.465902   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.465916   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.465922   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.465951   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466101   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466121   14235 addons.go:70] Setting inspektor-gadget=true in profile "addons-341255"
	I1102 12:47:53.466135   14235 addons.go:239] Setting addon inspektor-gadget=true in "addons-341255"
	I1102 12:47:53.466157   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.466445   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466795   14235 out.go:179] * Verifying Kubernetes components...
	I1102 12:47:53.466514   14235 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-341255"
	I1102 12:47:53.467023   14235 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-341255"
	I1102 12:47:53.467048   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.467507   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466540   14235 addons.go:70] Setting default-storageclass=true in profile "addons-341255"
	I1102 12:47:53.468260   14235 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-341255"
	I1102 12:47:53.466553   14235 addons.go:70] Setting ingress=true in profile "addons-341255"
	I1102 12:47:53.469938   14235 addons.go:239] Setting addon ingress=true in "addons-341255"
	I1102 12:47:53.469992   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.470521   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.470884   14235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 12:47:53.466574   14235 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-341255"
	I1102 12:47:53.471122   14235 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-341255"
	I1102 12:47:53.471151   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.471638   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466584   14235 addons.go:70] Setting registry=true in profile "addons-341255"
	I1102 12:47:53.472755   14235 addons.go:239] Setting addon registry=true in "addons-341255"
	I1102 12:47:53.472787   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.466591   14235 addons.go:70] Setting cloud-spanner=true in profile "addons-341255"
	I1102 12:47:53.472855   14235 addons.go:239] Setting addon cloud-spanner=true in "addons-341255"
	I1102 12:47:53.472887   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.473269   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.473324   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466600   14235 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-341255"
	I1102 12:47:53.473929   14235 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-341255"
	I1102 12:47:53.466607   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466608   14235 addons.go:70] Setting storage-provisioner=true in profile "addons-341255"
	I1102 12:47:53.474178   14235 addons.go:239] Setting addon storage-provisioner=true in "addons-341255"
	I1102 12:47:53.474205   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.466624   14235 addons.go:70] Setting volcano=true in profile "addons-341255"
	I1102 12:47:53.474362   14235 addons.go:239] Setting addon volcano=true in "addons-341255"
	I1102 12:47:53.474459   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.466646   14235 addons.go:70] Setting volumesnapshots=true in profile "addons-341255"
	I1102 12:47:53.475626   14235 addons.go:239] Setting addon volumesnapshots=true in "addons-341255"
	I1102 12:47:53.475670   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.482016   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.482560   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.482638   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.482726   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.484947   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.528159   14235 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1102 12:47:53.531987   14235 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1102 12:47:53.532027   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1102 12:47:53.532107   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.534620   14235 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1102 12:47:53.537818   14235 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1102 12:47:53.537857   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1102 12:47:53.537924   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.543747   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.561171   14235 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1102 12:47:53.562600   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1102 12:47:53.563501   14235 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1102 12:47:53.563525   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1102 12:47:53.563704   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.563937   14235 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1102 12:47:53.565177   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1102 12:47:53.568068   14235 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-341255"
	I1102 12:47:53.568123   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.568548   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.570697   14235 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1102 12:47:53.570721   14235 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1102 12:47:53.570792   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.578647   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1102 12:47:53.578720   14235 out.go:179]   - Using image docker.io/registry:3.0.0
	I1102 12:47:53.578852   14235 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1102 12:47:53.580407   14235 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1102 12:47:53.582140   14235 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1102 12:47:53.582165   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1102 12:47:53.582265   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.582290   14235 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1102 12:47:53.582333   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1102 12:47:53.582373   14235 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1102 12:47:53.582383   14235 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1102 12:47:53.582438   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.582704   14235 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 12:47:53.583732   14235 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1102 12:47:53.584768   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1102 12:47:53.584905   14235 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1102 12:47:53.584920   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1102 12:47:53.584969   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.585326   14235 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1102 12:47:53.585342   14235 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1102 12:47:53.585406   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.585806   14235 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 12:47:53.586871   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1102 12:47:53.588600   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1102 12:47:53.589621   14235 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	W1102 12:47:53.589879   14235 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1102 12:47:53.590767   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1102 12:47:53.590909   14235 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1102 12:47:53.590925   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1102 12:47:53.590979   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.592810   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1102 12:47:53.592946   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1102 12:47:53.593034   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.596279   14235 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 12:47:53.597808   14235 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 12:47:53.597845   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 12:47:53.597904   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.606292   14235 addons.go:239] Setting addon default-storageclass=true in "addons-341255"
	I1102 12:47:53.606403   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.606846   14235 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1102 12:47:53.606903   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.608893   14235 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1102 12:47:53.608990   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1102 12:47:53.609159   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.608948   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1102 12:47:53.609084   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.612282   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1102 12:47:53.612362   14235 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1102 12:47:53.612489   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.622183   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.627001   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.635658   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.640697   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.642258   14235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 12:47:53.658965   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.660375   14235 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1102 12:47:53.664196   14235 out.go:179]   - Using image docker.io/busybox:stable
	I1102 12:47:53.664495   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.664980   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.665424   14235 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1102 12:47:53.665440   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1102 12:47:53.665501   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.667626   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.670348   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.670803   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.675540   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.679392   14235 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 12:47:53.679409   14235 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 12:47:53.679456   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.679758   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	W1102 12:47:53.693122   14235 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1102 12:47:53.693345   14235 retry.go:31] will retry after 313.675455ms: ssh: handshake failed: EOF
	I1102 12:47:53.716337   14235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 12:47:53.716534   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.722694   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	W1102 12:47:53.723727   14235 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1102 12:47:53.723755   14235 retry.go:31] will retry after 322.850877ms: ssh: handshake failed: EOF
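The two retry.go lines above show transient ssh handshake failures (EOF while the container's sshd is still coming up) being retried after a short randomized delay instead of failing the addon install outright. A sketch of that behaviour under assumed parameters; the dial function is a stand-in for the real ssh client setup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func withRetry(attempts int, dial func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = dial(); err == nil {
			return nil
		}
		// Randomized delay in the ~300ms range, matching the delays logged above.
		delay := 300*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := withRetry(3, func() error {
		calls++
		if calls < 2 { // simulate one transient handshake failure
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("result:", err)
}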
	I1102 12:47:53.777933   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1102 12:47:53.808230   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 12:47:53.817989   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1102 12:47:53.823436   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1102 12:47:53.826730   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1102 12:47:53.826751   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1102 12:47:53.835803   14235 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1102 12:47:53.835827   14235 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1102 12:47:53.849964   14235 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:47:53.849992   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1102 12:47:53.852988   14235 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1102 12:47:53.853015   14235 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1102 12:47:53.861927   14235 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1102 12:47:53.861953   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1102 12:47:53.866335   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1102 12:47:53.872773   14235 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1102 12:47:53.872800   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1102 12:47:53.873757   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 12:47:53.882609   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1102 12:47:53.885222   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1102 12:47:53.895602   14235 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1102 12:47:53.895642   14235 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1102 12:47:53.897627   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1102 12:47:53.897652   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1102 12:47:53.898405   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:47:53.903669   14235 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1102 12:47:53.903692   14235 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1102 12:47:53.931865   14235 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1102 12:47:53.931897   14235 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1102 12:47:53.933673   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1102 12:47:53.933692   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1102 12:47:53.948149   14235 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1102 12:47:53.948248   14235 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1102 12:47:53.961762   14235 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1102 12:47:53.961785   14235 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1102 12:47:53.981494   14235 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1102 12:47:53.981598   14235 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1102 12:47:53.990228   14235 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1102 12:47:53.990302   14235 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1102 12:47:54.000508   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1102 12:47:54.000531   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1102 12:47:54.018204   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1102 12:47:54.018316   14235 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1102 12:47:54.042219   14235 start.go:1013] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
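
[annotation] The line above records minikube adding a host record (host.minikube.internal -> 192.168.49.1) to CoreDNS's ConfigMap so pods can reach the host gateway by name. A sketch of that kind of ConfigMap edit with client-go; the exact Corefile stanza and its placement before the forward plugin are assumptions for illustration:

    package corednspatch

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // injectHostRecord appends a hosts{} stanza to the CoreDNS Corefile
    // mapping host.minikube.internal to the host gateway IP.
    func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
        cmClient := cs.CoreV1().ConfigMaps("kube-system")
        cm, err := cmClient.Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
            return nil // already injected; the operation is idempotent
        }
        stanza := fmt.Sprintf("hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n    ", hostIP)
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "forward .", stanza+"forward .", 1)
        _, err = cmClient.Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }
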
	I1102 12:47:54.044055   14235 node_ready.go:35] waiting up to 6m0s for node "addons-341255" to be "Ready" ...
	I1102 12:47:54.044760   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1102 12:47:54.044824   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1102 12:47:54.057842   14235 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1102 12:47:54.057868   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1102 12:47:54.074582   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1102 12:47:54.089200   14235 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1102 12:47:54.089222   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1102 12:47:54.092349   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1102 12:47:54.092368   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1102 12:47:54.111925   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1102 12:47:54.138560   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1102 12:47:54.138618   14235 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1102 12:47:54.139464   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1102 12:47:54.188471   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1102 12:47:54.188500   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1102 12:47:54.224244   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1102 12:47:54.231111   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1102 12:47:54.231188   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1102 12:47:54.246226   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1102 12:47:54.255585   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1102 12:47:54.255610   14235 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1102 12:47:54.296748   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
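
[annotation] Note the pattern running through this whole phase: each addon's manifests are first scp'd into /etc/kubernetes/addons/ on the node, then every file for that addon is passed to a single kubectl apply via repeated -f flags (the csi-hostpath-driver call above carries eleven), so related RBAC, Deployments, and StorageClasses land atomically from the runner's point of view. A minimal sketch of building such a call (illustrative helper, not minikube's ssh_runner):

    package addons

    import (
        "os"
        "os/exec"
    )

    // applyAddon batches every manifest for one addon into a single
    // kubectl apply, matching the repeated -f flags in the log above.
    func applyAddon(kubectl, kubeconfig string, manifests []string) ([]byte, error) {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        return cmd.CombinedOutput()
    }
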
	I1102 12:47:54.560161   14235 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-341255" context rescaled to 1 replicas
	I1102 12:47:55.098684   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.216035837s)
	I1102 12:47:55.098730   14235 addons.go:480] Verifying addon ingress=true in "addons-341255"
	I1102 12:47:55.098862   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.213599091s)
	I1102 12:47:55.098891   14235 addons.go:480] Verifying addon registry=true in "addons-341255"
	I1102 12:47:55.099042   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.200604214s)
	W1102 12:47:55.099071   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:55.099093   14235 retry.go:31] will retry after 151.321007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
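
[annotation] This failure is deterministic, not transient: the stdout shows every object from ig-deployment.yaml applying cleanly, while ig-crd.yaml is rejected by kubectl's client-side validation because the document lacks its apiVersion and kind header. Since the file's content never changes between attempts, the retry loop that follows can never succeed. A pre-flight check that would catch this before shelling out to kubectl, sketched with sigs.k8s.io/yaml (multi-document files would need splitting on "---" first; the helper name is hypothetical):

    package preflight

    import (
        "fmt"

        "sigs.k8s.io/yaml"
    )

    // typeMeta is the minimal header kubectl validation requires on
    // every manifest document.
    type typeMeta struct {
        APIVersion string `json:"apiVersion"`
        Kind       string `json:"kind"`
    }

    // checkManifest reports the same condition kubectl complains about:
    // "apiVersion not set, kind not set".
    func checkManifest(doc []byte) error {
        var tm typeMeta
        if err := yaml.Unmarshal(doc, &tm); err != nil {
            return err
        }
        if tm.APIVersion == "" || tm.Kind == "" {
            return fmt.Errorf("apiVersion/kind not set; kubectl apply will reject this document")
        }
        return nil
    }
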
	I1102 12:47:55.099127   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.024500033s)
	I1102 12:47:55.099145   14235 addons.go:480] Verifying addon metrics-server=true in "addons-341255"
	I1102 12:47:55.100210   14235 out.go:179] * Verifying registry addon...
	I1102 12:47:55.100210   14235 out.go:179] * Verifying ingress addon...
	I1102 12:47:55.101250   14235 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-341255 service yakd-dashboard -n yakd-dashboard
	
	I1102 12:47:55.103174   14235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1102 12:47:55.103179   14235 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1102 12:47:55.105582   14235 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1102 12:47:55.105600   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:55.107481   14235 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1102 12:47:55.107498   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:55.250874   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:47:55.413455   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.273950835s)
	W1102 12:47:55.413506   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1102 12:47:55.413508   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.189222288s)
	I1102 12:47:55.413529   14235 retry.go:31] will retry after 251.618983ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
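
[annotation] Unlike the ig-crd.yaml case, this failure is a genuine race: the stdout shows the three snapshot.storage.k8s.io CRDs being created in the same apply, but API discovery has not yet registered the VolumeSnapshotClass kind when the csi-hostpath-snapclass object in the same batch is submitted, hence "ensure CRDs are installed first". The retry 251ms later gives the API server time to establish the CRDs, and the --force reapply at 12:47:55.666 completes without a logged error. A sketch of waiting for the Established condition explicitly instead of retrying blind, using the apiextensions clientset:

    package crdwait

    import (
        "context"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitEstablished polls until the named CRD reports Established=True,
    // after which custom resources of that kind can be applied safely.
    func waitEstablished(ctx context.Context, cs clientset.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, time.Second, time.Minute, true,
            func(ctx context.Context) (bool, error) {
                crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // not visible yet; keep polling
                }
                for _, c := range crd.Status.Conditions {
                    if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }
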
	I1102 12:47:55.413548   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.167291037s)
	I1102 12:47:55.413791   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.116826152s)
	I1102 12:47:55.413814   14235 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-341255"
	I1102 12:47:55.415159   14235 out.go:179] * Verifying csi-hostpath-driver addon...
	I1102 12:47:55.417656   14235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1102 12:47:55.420473   14235 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1102 12:47:55.420493   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
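
[annotation] Each "waiting for pod ... current state: Pending" line from here on is one tick of a label-selector poll; the state stays Pending throughout this window, most likely because the node itself is still NotReady (see the interleaved node_ready.go warnings). A minimal sketch of that polling loop with client-go, under the assumption that kapi.go gates on pod phase:

    package kapi

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodsRunning polls pods matching the label selector until every
    // one reports phase Running, one tick per log line above.
    func waitPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }

The node_ready.go wait ("up to 6m0s for node ... to be Ready") is the same shape of loop, checking the node's Ready condition instead of pod phase.
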
	I1102 12:47:55.605914   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:55.606105   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:55.666024   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1102 12:47:55.856462   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:55.856547   14235 retry.go:31] will retry after 529.760049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:55.919989   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:47:56.046431   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:47:56.106274   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:56.106360   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:56.387082   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:47:56.421043   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:47:56.607090   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:56.607147   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:56.920863   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:47:57.106951   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:57.106951   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:57.421051   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:47:57.606912   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:57.607000   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:57.920473   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:47:58.047093   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:47:58.106431   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:58.106601   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:58.152961   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.486895479s)
	I1102 12:47:58.152997   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.76588472s)
	W1102 12:47:58.153027   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:58.153046   14235 retry.go:31] will retry after 778.208022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:58.421148   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:47:58.606768   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:58.606953   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:58.921240   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:47:58.932349   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:47:59.106763   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:59.106912   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:59.420589   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:47:59.451546   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:59.451592   14235 retry.go:31] will retry after 1.196848603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:59.606706   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:59.606794   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:59.920755   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:00.106867   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:00.106940   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:00.420634   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:00.547063   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:00.606975   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:00.607154   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:00.649195   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:00.920837   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:01.106337   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:01.106378   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 12:48:01.168336   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:01.168370   14235 retry.go:31] will retry after 1.572153218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:01.172231   14235 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1102 12:48:01.172301   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:48:01.188952   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:48:01.292806   14235 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1102 12:48:01.304964   14235 addons.go:239] Setting addon gcp-auth=true in "addons-341255"
	I1102 12:48:01.305017   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:48:01.305360   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:48:01.323723   14235 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1102 12:48:01.323775   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:48:01.340837   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
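
[annotation] Notice that before every SSH session minikube re-discovers the host-mapped SSH port by inspecting the container's published ports (Port:32768 above comes from that inspect). A sketch of the same lookup with os/exec, reusing the exact -f template from the log lines:

    package dockerport

    import (
        "os/exec"
        "strings"
    )

    // sshPort asks Docker which host port is mapped to the container's
    // 22/tcp, using the same Go template that appears in the log.
    func sshPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "-f", `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
    }
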
	I1102 12:48:01.420612   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:01.437339   14235 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 12:48:01.438869   14235 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1102 12:48:01.439790   14235 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1102 12:48:01.439803   14235 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1102 12:48:01.451974   14235 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1102 12:48:01.451995   14235 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1102 12:48:01.463749   14235 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1102 12:48:01.463768   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1102 12:48:01.475701   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1102 12:48:01.607031   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:01.607086   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:01.771715   14235 addons.go:480] Verifying addon gcp-auth=true in "addons-341255"
	I1102 12:48:01.773280   14235 out.go:179] * Verifying gcp-auth addon...
	I1102 12:48:01.775261   14235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1102 12:48:01.777392   14235 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1102 12:48:01.777430   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:01.921461   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:02.106397   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:02.106611   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:02.277912   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:02.420774   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:02.547543   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:02.606298   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:02.606590   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:02.741521   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:02.778999   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:02.920750   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:03.106523   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:03.106623   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 12:48:03.270903   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:03.270944   14235 retry.go:31] will retry after 1.823023277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:03.278343   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:03.421551   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:03.606257   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:03.606352   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:03.777923   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:03.920409   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:04.106656   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:04.106786   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:04.278388   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:04.421612   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:04.606785   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:04.607048   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:04.778122   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:04.920550   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:05.046898   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:05.095073   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:05.106549   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:05.107406   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:05.278345   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:05.421065   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:05.607082   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:05.607185   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 12:48:05.617124   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:05.617158   14235 retry.go:31] will retry after 4.122018669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:05.778626   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:05.921240   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:06.106479   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:06.106690   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:06.278167   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:06.420844   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:06.605814   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:06.605832   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:06.778313   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:06.920726   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:07.047487   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:07.106038   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:07.106227   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:07.278903   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:07.420637   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:07.606600   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:07.606678   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:07.778420   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:07.920771   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:08.105915   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:08.106075   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:08.278896   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:08.421337   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:08.606336   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:08.606444   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:08.777923   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:08.920613   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:09.105818   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:09.106060   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:09.278655   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:09.420506   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:09.546928   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:09.606292   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:09.606387   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:09.739470   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:09.778355   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:09.920801   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:10.106469   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:10.106610   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 12:48:10.265150   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:10.265182   14235 retry.go:31] will retry after 2.515147563s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:10.278694   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:10.420716   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:10.606747   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:10.606794   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:10.778734   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:10.920550   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:11.106616   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:11.106682   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:11.277920   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:11.420662   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:11.547076   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:11.606510   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:11.606718   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:11.778103   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:11.920943   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:12.106223   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:12.106425   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:12.278777   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:12.420673   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:12.605948   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:12.606206   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:12.778623   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:12.780704   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:12.921007   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:13.105721   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:13.105880   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:13.278788   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1102 12:48:13.310288   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:13.310319   14235 retry.go:31] will retry after 8.074968626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
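This failure has nothing to do with cluster state: kubectl validates each manifest document before applying it, and ig-crd.yaml is rejected because its top-level apiVersion and kind fields are missing, while every object in ig-deployment.yaml applies cleanly (see the stdout above). A minimal sketch of the header rule kubectl is complaining about, assuming gopkg.in/yaml.v3 as the parser; this illustrates the check, it is not kubectl's validation code:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// validateManifest rejects a document whose top-level apiVersion or kind
// field is absent, reproducing the error text in the log above.
func validateManifest(doc []byte) error {
	var m map[string]interface{}
	if err := yaml.Unmarshal(doc, &m); err != nil {
		return err
	}
	var missing []string
	if s, _ := m["apiVersion"].(string); s == "" {
		missing = append(missing, "apiVersion not set")
	}
	if s, _ := m["kind"].(string); s == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		return fmt.Errorf("error validating data: %v", missing)
	}
	return nil
}

func main() {
	// A manifest with a body but no header, which is what the error
	// message implies ig-crd.yaml looked like during this run.
	broken := []byte("metadata:\n  name: example\n")
	fmt.Println(validateManifest(broken))
}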
	I1102 12:48:13.421298   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:13.606613   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:13.606782   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:13.778406   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:13.921100   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:14.046502   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:14.105994   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:14.106077   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:14.278679   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:14.420502   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:14.606113   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:14.606351   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:14.777942   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:14.920374   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:15.106024   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:15.106280   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:15.277841   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:15.420425   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:15.606316   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:15.606401   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:15.777950   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:15.920645   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:16.046899   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:16.106552   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:16.106583   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:16.277918   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:16.420498   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:16.606476   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:16.606671   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:16.778211   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:16.920933   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:17.105662   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:17.105805   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:17.278161   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:17.420823   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:17.605844   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:17.605988   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:17.778270   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:17.921238   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:18.105901   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:18.106169   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:18.278617   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:18.420593   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:18.547245   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:18.605941   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:18.606092   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:18.778681   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:18.920170   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:19.105993   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:19.106072   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:19.278523   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:19.421212   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:19.606195   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:19.606404   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:19.778900   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:19.920211   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:20.105900   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:20.105978   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:20.278641   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:20.420121   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:20.547444   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:20.605771   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:20.605945   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:20.778613   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:20.921095   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:21.105920   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:21.106128   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:21.278807   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:21.385997   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:21.420905   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:21.606551   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:21.606638   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:21.778831   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1102 12:48:21.907610   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:21.907641   14235 retry.go:31] will retry after 10.635923497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:21.921006   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:22.106052   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:22.106273   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:22.278390   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:22.421210   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:22.606014   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:22.606055   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:22.778459   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:22.921129   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:23.047682   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:23.106086   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:23.106295   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:23.277799   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:23.420543   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:23.605887   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:23.606063   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:23.778555   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:23.921287   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:24.106412   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:24.106478   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:24.277747   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:24.420375   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:24.606483   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:24.606604   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:24.777804   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:24.920309   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:25.106336   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:25.106542   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:25.278187   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:25.420636   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:25.547335   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:25.605589   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:25.605788   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:25.777856   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:25.921927   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:26.105825   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:26.105986   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:26.278523   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:26.421224   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:26.606465   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:26.606682   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:26.778038   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:26.920669   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:27.106661   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:27.106746   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:27.278230   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:27.421006   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:27.606989   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:27.608085   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:27.778771   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:27.920333   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:28.046880   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:28.106782   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:28.106984   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:28.278720   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:28.420392   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:28.606561   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:28.606739   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:28.778278   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:28.920913   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:29.105911   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:29.106124   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:29.278735   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:29.420226   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:29.606603   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:29.606794   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:29.778237   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:29.920846   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:30.047164   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:30.106544   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:30.106717   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:30.278604   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:30.421238   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:30.606152   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:30.606389   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:30.777744   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:30.920261   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:31.106391   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:31.106614   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:31.278435   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:31.420937   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:31.606251   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:31.606681   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:31.777851   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:31.920521   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:32.106452   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:32.106668   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:32.278248   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:32.421089   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:32.544263   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1102 12:48:32.547084   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:32.607040   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:32.607115   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:32.778637   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:32.921239   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:33.072632   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:33.072662   14235 retry.go:31] will retry after 15.068864638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
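Each failed apply is rescheduled by retry.go:31 with a growing, jittered delay (8.07s, then 10.64s, then 15.07s here), so transient failures back off rather than hammer the apiserver; since the manifest itself is invalid, every retry fails identically while the rest of the start-up flow continues around it. A minimal sketch of that kind of jittered exponential backoff; the growth factor and jitter here are assumptions, not minikube's actual retry helper:

package main

import (
	"errors"
	"fmt"
	"math"
	"math/rand"
	"time"
)

// retryWithBackoff runs op up to attempts times, sleeping a jittered,
// exponentially growing interval between failures, similar in spirit to
// the retry.go lines above.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		wait := time.Duration(float64(base) * math.Pow(1.3, float64(i)))
		wait += time.Duration(rand.Int63n(int64(wait/4) + 1)) // up to ~25% jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	// Scaled to milliseconds so the demo finishes quickly; the log above
	// backs off in seconds.
	err := retryWithBackoff(3, 8*time.Millisecond, func() error {
		return errors.New("apply failed")
	})
	fmt.Println("gave up:", err)
}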
	I1102 12:48:33.106223   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:33.106264   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:33.278770   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:33.420299   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:33.606714   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:33.606889   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:33.778624   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:33.921321   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:34.106541   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:34.106686   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:34.278102   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:34.420750   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:34.547373   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:34.608996   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:34.609151   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:34.780460   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:34.921676   14235 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1102 12:48:34.921703   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:35.047666   14235 node_ready.go:49] node "addons-341255" is "Ready"
	I1102 12:48:35.047701   14235 node_ready.go:38] duration metric: took 41.003557274s for node "addons-341255" to be "Ready" ...
	I1102 12:48:35.047720   14235 api_server.go:52] waiting for apiserver process to appear ...
	I1102 12:48:35.047776   14235 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 12:48:35.066112   14235 api_server.go:72] duration metric: took 41.601005894s to wait for apiserver process to appear ...
	I1102 12:48:35.066145   14235 api_server.go:88] waiting for apiserver healthz status ...
	I1102 12:48:35.066170   14235 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1102 12:48:35.071526   14235 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1102 12:48:35.072436   14235 api_server.go:141] control plane version: v1.34.1
	I1102 12:48:35.072463   14235 api_server.go:131] duration metric: took 6.309312ms to wait for apiserver health ...
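With the node Ready, the flow switches from polling the Node object to probing the control plane directly: pgrep first confirms a kube-apiserver process exists, then an HTTPS GET against /healthz must return 200 with body "ok", as logged above. A minimal sketch of that probe; skipping TLS verification is a shortcut for illustration, where the real check would trust the cluster CA from the kubeconfig instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 with body "ok" or the
// timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %v", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.49.2:8443/healthz", 30*time.Second))
}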
	I1102 12:48:35.072474   14235 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 12:48:35.076254   14235 system_pods.go:59] 20 kube-system pods found
	I1102 12:48:35.076287   14235 system_pods.go:61] "amd-gpu-device-plugin-kjxsc" [f1aff96e-1b05-4f54-8ca1-4dec91ec69de] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1102 12:48:35.076296   14235 system_pods.go:61] "coredns-66bc5c9577-pvw29" [75d01053-1137-481f-a631-9589ef68c4bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 12:48:35.076307   14235 system_pods.go:61] "csi-hostpath-attacher-0" [b7f892ff-92e7-4b2b-9e17-f07990d022cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 12:48:35.076315   14235 system_pods.go:61] "csi-hostpath-resizer-0" [e596dc1e-310d-4e8b-89a4-13415eb568ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 12:48:35.076323   14235 system_pods.go:61] "csi-hostpathplugin-dj5hr" [5fa980a1-5140-4891-936c-a18f81fc2fa6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 12:48:35.076333   14235 system_pods.go:61] "etcd-addons-341255" [4a9c29c7-8b95-4d9c-9e70-64d828564cf5] Running
	I1102 12:48:35.076338   14235 system_pods.go:61] "kindnet-wsss9" [b026a542-5afb-4529-b49a-15b8f8992e81] Running
	I1102 12:48:35.076348   14235 system_pods.go:61] "kube-apiserver-addons-341255" [db4b7996-6dd7-49f7-a30b-58d912a334d2] Running
	I1102 12:48:35.076353   14235 system_pods.go:61] "kube-controller-manager-addons-341255" [9ba1198b-9297-4650-aa65-1b04a6a5b7aa] Running
	I1102 12:48:35.076364   14235 system_pods.go:61] "kube-ingress-dns-minikube" [64449ffe-7912-4563-8a68-633847fe26ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 12:48:35.076371   14235 system_pods.go:61] "kube-proxy-prdwm" [7aa09fd7-54a4-422e-ad98-0cd851a8ca56] Running
	I1102 12:48:35.076377   14235 system_pods.go:61] "kube-scheduler-addons-341255" [3a057967-fb83-46ef-8203-01a7a5b20df9] Running
	I1102 12:48:35.076387   14235 system_pods.go:61] "metrics-server-85b7d694d7-gxjkw" [06bf62df-163b-4afb-9505-6cc7bdca087f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 12:48:35.076397   14235 system_pods.go:61] "nvidia-device-plugin-daemonset-5g45d" [cd4170a6-d3bf-437b-8100-33df2a1c3693] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 12:48:35.076408   14235 system_pods.go:61] "registry-6b586f9694-w59vr" [2ed80c97-9f39-46ad-8c55-323cb5ec9834] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 12:48:35.076432   14235 system_pods.go:61] "registry-creds-764b6fb674-xqr5t" [7e050c27-8e52-47ac-a415-124991eae36a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 12:48:35.076441   14235 system_pods.go:61] "registry-proxy-2rjx9" [95e7e89f-42a8-4527-9c6e-2acb1b50b3fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 12:48:35.076453   14235 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d8c66" [00151aa7-7190-4a2a-98bf-168f13f3d593] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.076466   14235 system_pods.go:61] "snapshot-controller-7d9fbc56b8-lrxfs" [602caacb-0980-4bd9-bd90-612501dafc40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.076473   14235 system_pods.go:61] "storage-provisioner" [f41527b9-0120-4e29-994c-932501c2eb53] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 12:48:35.076485   14235 system_pods.go:74] duration metric: took 4.003475ms to wait for pod list to return data ...
	I1102 12:48:35.076494   14235 default_sa.go:34] waiting for default service account to be created ...
	I1102 12:48:35.078954   14235 default_sa.go:45] found service account: "default"
	I1102 12:48:35.078979   14235 default_sa.go:55] duration metric: took 2.478323ms for default service account to be created ...
	I1102 12:48:35.078990   14235 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 12:48:35.175784   14235 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1102 12:48:35.175810   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:35.176429   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:35.180811   14235 system_pods.go:86] 20 kube-system pods found
	I1102 12:48:35.180845   14235 system_pods.go:89] "amd-gpu-device-plugin-kjxsc" [f1aff96e-1b05-4f54-8ca1-4dec91ec69de] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1102 12:48:35.180855   14235 system_pods.go:89] "coredns-66bc5c9577-pvw29" [75d01053-1137-481f-a631-9589ef68c4bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 12:48:35.180865   14235 system_pods.go:89] "csi-hostpath-attacher-0" [b7f892ff-92e7-4b2b-9e17-f07990d022cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 12:48:35.180872   14235 system_pods.go:89] "csi-hostpath-resizer-0" [e596dc1e-310d-4e8b-89a4-13415eb568ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 12:48:35.180880   14235 system_pods.go:89] "csi-hostpathplugin-dj5hr" [5fa980a1-5140-4891-936c-a18f81fc2fa6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 12:48:35.180886   14235 system_pods.go:89] "etcd-addons-341255" [4a9c29c7-8b95-4d9c-9e70-64d828564cf5] Running
	I1102 12:48:35.180891   14235 system_pods.go:89] "kindnet-wsss9" [b026a542-5afb-4529-b49a-15b8f8992e81] Running
	I1102 12:48:35.180896   14235 system_pods.go:89] "kube-apiserver-addons-341255" [db4b7996-6dd7-49f7-a30b-58d912a334d2] Running
	I1102 12:48:35.180901   14235 system_pods.go:89] "kube-controller-manager-addons-341255" [9ba1198b-9297-4650-aa65-1b04a6a5b7aa] Running
	I1102 12:48:35.180908   14235 system_pods.go:89] "kube-ingress-dns-minikube" [64449ffe-7912-4563-8a68-633847fe26ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 12:48:35.180913   14235 system_pods.go:89] "kube-proxy-prdwm" [7aa09fd7-54a4-422e-ad98-0cd851a8ca56] Running
	I1102 12:48:35.180918   14235 system_pods.go:89] "kube-scheduler-addons-341255" [3a057967-fb83-46ef-8203-01a7a5b20df9] Running
	I1102 12:48:35.180925   14235 system_pods.go:89] "metrics-server-85b7d694d7-gxjkw" [06bf62df-163b-4afb-9505-6cc7bdca087f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 12:48:35.180934   14235 system_pods.go:89] "nvidia-device-plugin-daemonset-5g45d" [cd4170a6-d3bf-437b-8100-33df2a1c3693] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 12:48:35.180941   14235 system_pods.go:89] "registry-6b586f9694-w59vr" [2ed80c97-9f39-46ad-8c55-323cb5ec9834] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 12:48:35.180949   14235 system_pods.go:89] "registry-creds-764b6fb674-xqr5t" [7e050c27-8e52-47ac-a415-124991eae36a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 12:48:35.180956   14235 system_pods.go:89] "registry-proxy-2rjx9" [95e7e89f-42a8-4527-9c6e-2acb1b50b3fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 12:48:35.180966   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d8c66" [00151aa7-7190-4a2a-98bf-168f13f3d593] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.180974   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lrxfs" [602caacb-0980-4bd9-bd90-612501dafc40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.180981   14235 system_pods.go:89] "storage-provisioner" [f41527b9-0120-4e29-994c-932501c2eb53] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 12:48:35.180997   14235 retry.go:31] will retry after 297.366179ms: missing components: kube-dns
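The system_pods check is doing more than counting: specific components must have a Running pod, and it keeps retrying because the CoreDNS pod (coredns-66bc5c9577-pvw29, still Pending above) has not started, hence "missing components: kube-dns". A minimal client-go sketch of such a check; the k8s-app label convention is the standard one for kubeadm-deployed CoreDNS, but minikube's exact matching logic is not shown here:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// missingComponents reports each required app with no Running pod in
// kube-system, the condition behind "missing components: kube-dns".
func missingComponents(cs *kubernetes.Clientset, required []string) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	running := map[string]bool{}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running[p.Labels["k8s-app"]] = true
		}
	}
	var missing []string
	for _, name := range required {
		if !running[name] {
			missing = append(missing, name)
		}
	}
	return missing, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(missingComponents(cs, []string{"kube-dns"}))
}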
	I1102 12:48:35.280371   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:35.422944   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:35.484258   14235 system_pods.go:86] 20 kube-system pods found
	I1102 12:48:35.484422   14235 system_pods.go:89] "amd-gpu-device-plugin-kjxsc" [f1aff96e-1b05-4f54-8ca1-4dec91ec69de] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1102 12:48:35.484448   14235 system_pods.go:89] "coredns-66bc5c9577-pvw29" [75d01053-1137-481f-a631-9589ef68c4bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 12:48:35.484482   14235 system_pods.go:89] "csi-hostpath-attacher-0" [b7f892ff-92e7-4b2b-9e17-f07990d022cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 12:48:35.484508   14235 system_pods.go:89] "csi-hostpath-resizer-0" [e596dc1e-310d-4e8b-89a4-13415eb568ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 12:48:35.484529   14235 system_pods.go:89] "csi-hostpathplugin-dj5hr" [5fa980a1-5140-4891-936c-a18f81fc2fa6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 12:48:35.484549   14235 system_pods.go:89] "etcd-addons-341255" [4a9c29c7-8b95-4d9c-9e70-64d828564cf5] Running
	I1102 12:48:35.484671   14235 system_pods.go:89] "kindnet-wsss9" [b026a542-5afb-4529-b49a-15b8f8992e81] Running
	I1102 12:48:35.484721   14235 system_pods.go:89] "kube-apiserver-addons-341255" [db4b7996-6dd7-49f7-a30b-58d912a334d2] Running
	I1102 12:48:35.484730   14235 system_pods.go:89] "kube-controller-manager-addons-341255" [9ba1198b-9297-4650-aa65-1b04a6a5b7aa] Running
	I1102 12:48:35.484740   14235 system_pods.go:89] "kube-ingress-dns-minikube" [64449ffe-7912-4563-8a68-633847fe26ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 12:48:35.484746   14235 system_pods.go:89] "kube-proxy-prdwm" [7aa09fd7-54a4-422e-ad98-0cd851a8ca56] Running
	I1102 12:48:35.484752   14235 system_pods.go:89] "kube-scheduler-addons-341255" [3a057967-fb83-46ef-8203-01a7a5b20df9] Running
	I1102 12:48:35.484760   14235 system_pods.go:89] "metrics-server-85b7d694d7-gxjkw" [06bf62df-163b-4afb-9505-6cc7bdca087f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 12:48:35.484802   14235 system_pods.go:89] "nvidia-device-plugin-daemonset-5g45d" [cd4170a6-d3bf-437b-8100-33df2a1c3693] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 12:48:35.484823   14235 system_pods.go:89] "registry-6b586f9694-w59vr" [2ed80c97-9f39-46ad-8c55-323cb5ec9834] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 12:48:35.484842   14235 system_pods.go:89] "registry-creds-764b6fb674-xqr5t" [7e050c27-8e52-47ac-a415-124991eae36a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 12:48:35.484864   14235 system_pods.go:89] "registry-proxy-2rjx9" [95e7e89f-42a8-4527-9c6e-2acb1b50b3fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 12:48:35.484912   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d8c66" [00151aa7-7190-4a2a-98bf-168f13f3d593] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.484933   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lrxfs" [602caacb-0980-4bd9-bd90-612501dafc40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.484941   14235 system_pods.go:89] "storage-provisioner" [f41527b9-0120-4e29-994c-932501c2eb53] Running
	I1102 12:48:35.484961   14235 retry.go:31] will retry after 298.621934ms: missing components: kube-dns
	I1102 12:48:35.608600   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:35.608644   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:35.779658   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:35.789685   14235 system_pods.go:86] 20 kube-system pods found
	I1102 12:48:35.789722   14235 system_pods.go:89] "amd-gpu-device-plugin-kjxsc" [f1aff96e-1b05-4f54-8ca1-4dec91ec69de] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1102 12:48:35.789730   14235 system_pods.go:89] "coredns-66bc5c9577-pvw29" [75d01053-1137-481f-a631-9589ef68c4bf] Running
	I1102 12:48:35.789740   14235 system_pods.go:89] "csi-hostpath-attacher-0" [b7f892ff-92e7-4b2b-9e17-f07990d022cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 12:48:35.789748   14235 system_pods.go:89] "csi-hostpath-resizer-0" [e596dc1e-310d-4e8b-89a4-13415eb568ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 12:48:35.789765   14235 system_pods.go:89] "csi-hostpathplugin-dj5hr" [5fa980a1-5140-4891-936c-a18f81fc2fa6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 12:48:35.789772   14235 system_pods.go:89] "etcd-addons-341255" [4a9c29c7-8b95-4d9c-9e70-64d828564cf5] Running
	I1102 12:48:35.789780   14235 system_pods.go:89] "kindnet-wsss9" [b026a542-5afb-4529-b49a-15b8f8992e81] Running
	I1102 12:48:35.789785   14235 system_pods.go:89] "kube-apiserver-addons-341255" [db4b7996-6dd7-49f7-a30b-58d912a334d2] Running
	I1102 12:48:35.789790   14235 system_pods.go:89] "kube-controller-manager-addons-341255" [9ba1198b-9297-4650-aa65-1b04a6a5b7aa] Running
	I1102 12:48:35.789797   14235 system_pods.go:89] "kube-ingress-dns-minikube" [64449ffe-7912-4563-8a68-633847fe26ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 12:48:35.789802   14235 system_pods.go:89] "kube-proxy-prdwm" [7aa09fd7-54a4-422e-ad98-0cd851a8ca56] Running
	I1102 12:48:35.789807   14235 system_pods.go:89] "kube-scheduler-addons-341255" [3a057967-fb83-46ef-8203-01a7a5b20df9] Running
	I1102 12:48:35.789815   14235 system_pods.go:89] "metrics-server-85b7d694d7-gxjkw" [06bf62df-163b-4afb-9505-6cc7bdca087f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 12:48:35.789823   14235 system_pods.go:89] "nvidia-device-plugin-daemonset-5g45d" [cd4170a6-d3bf-437b-8100-33df2a1c3693] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 12:48:35.789831   14235 system_pods.go:89] "registry-6b586f9694-w59vr" [2ed80c97-9f39-46ad-8c55-323cb5ec9834] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 12:48:35.789841   14235 system_pods.go:89] "registry-creds-764b6fb674-xqr5t" [7e050c27-8e52-47ac-a415-124991eae36a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 12:48:35.789855   14235 system_pods.go:89] "registry-proxy-2rjx9" [95e7e89f-42a8-4527-9c6e-2acb1b50b3fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 12:48:35.789862   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d8c66" [00151aa7-7190-4a2a-98bf-168f13f3d593] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.789871   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lrxfs" [602caacb-0980-4bd9-bd90-612501dafc40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.789877   14235 system_pods.go:89] "storage-provisioner" [f41527b9-0120-4e29-994c-932501c2eb53] Running
	I1102 12:48:35.789886   14235 system_pods.go:126] duration metric: took 710.88901ms to wait for k8s-apps to be running ...
	I1102 12:48:35.789926   14235 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 12:48:35.789999   14235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 12:48:35.842767   14235 system_svc.go:56] duration metric: took 52.861179ms WaitForService to wait for kubelet
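The kubelet check is a plain systemd query run over SSH: "systemctl is-active --quiet <unit>" prints nothing and answers purely through its exit status (0 only when the unit is active), which is why the log records only a duration. A minimal local sketch of the same check:

package main

import (
	"fmt"
	"os/exec"
)

// serviceActive reports whether a systemd unit is active; --quiet
// suppresses output, so the exit code carries the whole answer.
func serviceActive(unit string) bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}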
	I1102 12:48:35.842848   14235 kubeadm.go:587] duration metric: took 42.377746056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 12:48:35.842873   14235 node_conditions.go:102] verifying NodePressure condition ...
	I1102 12:48:35.846143   14235 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 12:48:35.846175   14235 node_conditions.go:123] node cpu capacity is 8
	I1102 12:48:35.846193   14235 node_conditions.go:105] duration metric: took 3.31399ms to run NodePressure ...
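The NodePressure verification reads capacity and condition fields off the Node object, which is where the ephemeral-storage (304681132Ki) and CPU (8) figures above come from. A minimal client-go sketch that reproduces the two capacity lines; condition checks such as MemoryPressure or DiskPressure would walk node.Status.Conditions the same way, though minikube's node_conditions.go may differ in detail:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name taken from the log above.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-341255", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}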
	I1102 12:48:35.846207   14235 start.go:242] waiting for startup goroutines ...
	I1102 12:48:35.921493   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:36.106204   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:36.106430   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:36.278066   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:36.421161   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:36.607038   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:36.607099   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:36.778743   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:36.920812   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:37.106745   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:37.106809   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:37.278333   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:37.421618   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:37.606754   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:37.606799   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:37.778902   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:37.920956   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:38.107063   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:38.107068   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:38.279028   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:38.421848   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:38.607296   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:38.607468   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:38.778184   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:38.921335   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:39.132262   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:39.132270   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:39.279168   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:39.421141   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:39.606885   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:39.606928   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:39.778633   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:39.920959   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:40.106861   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:40.106934   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:40.278890   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:40.421036   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:40.607151   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:40.607327   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:40.779128   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:40.921722   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:41.107033   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:41.107239   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:41.279382   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:41.421575   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:41.606743   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:41.606840   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:41.778765   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:41.921650   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:42.106621   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:42.106809   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:42.278603   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:42.535808   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:42.646506   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:42.646665   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:42.825039   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:42.927668   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:43.106312   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:43.106317   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:43.280636   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:43.421920   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:43.607772   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:43.607800   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:43.778434   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:43.921783   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:44.106996   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:44.107142   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:44.278737   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:44.420797   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:44.606903   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:44.606938   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:44.778355   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:44.920840   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:45.106378   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:45.106542   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:45.278429   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:45.421463   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:45.606303   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:45.606526   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:45.778866   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:45.922226   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:46.107813   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:46.108043   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:46.278126   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:46.421232   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:46.607231   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:46.607274   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:46.778895   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:46.921184   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:47.106847   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:47.106862   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:47.310201   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:47.421556   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:47.606405   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:47.606474   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:47.779084   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:47.921029   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:48.106993   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:48.107056   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:48.142158   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:48.278316   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:48.421697   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:48.607689   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:48.607886   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:48.778800   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1102 12:48:48.795362   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:48.795396   14235 retry.go:31] will retry after 13.784301391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
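	
	The validation failure above means kubectl found at least one YAML document inside /etc/kubernetes/addons/ig-crd.yaml that lacks the mandatory top-level apiVersion and kind fields; an empty document left behind by a stray `---` separator fails with exactly this message. For reference, a minimal well-formed CustomResourceDefinition carries both fields. The group and names below are illustrative placeholders, not the actual inspektor-gadget CRD:
	
	    apiVersion: apiextensions.k8s.io/v1
	    kind: CustomResourceDefinition
	    metadata:
	      name: traces.gadget.example.com    # hypothetical; must be <plural>.<group>
	    spec:
	      group: gadget.example.com
	      scope: Namespaced
	      names:
	        plural: traces
	        singular: trace
	        kind: Trace
	      versions:
	        - name: v1alpha1
	          served: true
	          storage: true
	          schema:
	            openAPIV3Schema:
	              type: object
	
	kubectl validates each document independently, which is why the seven resources listed in stdout (apparently from ig-deployment.yaml) were applied as "unchanged"/"configured" while the malformed CRD document was rejected.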
	I1102 12:48:48.921521   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:49.106654   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:49.106666   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:49.278932   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:49.421072   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:49.607088   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:49.607260   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:49.778950   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:49.920936   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:50.106795   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:50.106873   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:50.278699   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:50.420502   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:50.606354   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:50.606357   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:50.778320   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:50.921026   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:51.107044   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:51.107149   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:51.279144   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:51.421276   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:51.609021   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:51.609205   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:51.778890   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:51.921043   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:52.106776   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:52.106827   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:52.277945   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:52.421166   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:52.641750   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:52.642323   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:52.790432   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:52.967968   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:53.107297   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:53.107350   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:53.278906   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:53.421773   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:53.607244   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:53.607275   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:53.779015   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:53.921129   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:54.107336   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:54.107442   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:54.277452   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:54.421332   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:54.606221   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:54.606248   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:54.778892   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:54.920772   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:55.106681   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:55.106825   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:55.279287   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:55.422668   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:55.609431   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:55.610069   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:55.779032   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:55.921112   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:56.107000   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:56.107117   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:56.278948   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:56.420606   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:56.629810   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:56.629844   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:56.793764   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:56.920919   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:57.106424   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:57.106456   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:57.278343   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:57.421460   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:57.607043   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:57.607164   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:57.778640   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:57.921324   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:58.106493   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:58.106672   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:58.278637   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:58.421589   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:58.606449   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:58.606524   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:58.777939   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:58.921181   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:59.107277   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:59.107394   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:59.278094   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:59.421300   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:59.606520   14235 kapi.go:107] duration metric: took 1m4.503340629s to wait for kubernetes.io/minikube-addons=registry ...
	I1102 12:48:59.606529   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:59.777838   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:59.920944   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:00.108115   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:00.279325   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:00.422247   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:00.607539   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:00.778291   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:00.921675   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:01.106647   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:01.412933   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:01.420870   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:01.609061   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:01.778942   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:01.921116   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:02.106947   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:02.279416   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:02.421699   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:02.580794   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:49:02.606812   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:02.778239   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:02.921175   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:03.106778   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 12:49:03.243619   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:49:03.243646   14235 retry.go:31] will retry after 48.637805342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
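	
	As the kubectl message itself notes, validation can be bypassed while the manifest is being fixed. A one-off workaround, run on the node and mirroring the exact command from the log with --validate=false added, would be:
	
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	
	This only suppresses the symptom; the CRD document itself still needs its apiVersion and kind restored.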
	I1102 12:49:03.278537   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:03.421480   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:03.606429   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:03.777956   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:03.920922   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:04.107039   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:04.278886   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:04.421520   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:04.606146   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:04.779252   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:04.921234   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:05.106999   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:05.278546   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:05.421686   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:05.606793   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:05.778599   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:05.921484   14235 kapi.go:107] duration metric: took 1m10.503824309s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1102 12:49:06.106131   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:06.278743   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:06.606809   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:06.779944   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:07.106751   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:07.278447   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:07.607111   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:07.779501   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:08.107488   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:08.278365   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:08.606601   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:08.779243   14235 kapi.go:107] duration metric: took 1m7.003978908s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1102 12:49:08.780712   14235 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-341255 cluster.
	I1102 12:49:08.781825   14235 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1102 12:49:08.782890   14235 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
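	
	A minimal sketch of the opt-out mentioned in the message above, for a pod that should not receive mounted GCP credentials (the pod name is hypothetical, and the "true" value is an assumption; the log only names the label key):
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: skip-gcp-auth-demo           # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"     # key from the message above; value assumed
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/k8s-minikube/busybox
	        command: ["sleep", "3600"]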
	I1102 12:49:09.107735   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:09.607884   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:10.106244   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:10.606792   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:11.106455   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:11.606841   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:12.109093   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:12.606630   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:13.107233   14235 kapi.go:107] duration metric: took 1m18.004062175s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1102 12:49:51.882912   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1102 12:49:52.401747   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1102 12:49:52.401834   14235 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
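	
	Despite the failed CRD apply, the rest of the inspektor-gadget deployment did land: stdout above shows daemonset.apps/gadget configured, and a gadget container appears as Running in the container status section further down. A quick check, assuming kubectl is pointed at this cluster, and that the CRD is likely absent given the rejected apply:
	
	    kubectl -n gadget get daemonset,pods
	    kubectl get crd | grep -i gadget    # likely empty, given the failed apply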
	I1102 12:49:52.404016   14235 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, default-storageclass, metrics-server, yakd, nvidia-device-plugin, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1102 12:49:52.405252   14235 addons.go:515] duration metric: took 1m58.940157614s for enable addons: enabled=[registry-creds amd-gpu-device-plugin storage-provisioner cloud-spanner ingress-dns default-storageclass metrics-server yakd nvidia-device-plugin storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1102 12:49:52.405296   14235 start.go:247] waiting for cluster config update ...
	I1102 12:49:52.405315   14235 start.go:256] writing updated cluster config ...
	I1102 12:49:52.405557   14235 ssh_runner.go:195] Run: rm -f paused
	I1102 12:49:52.409334   14235 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 12:49:52.412629   14235 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pvw29" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.416399   14235 pod_ready.go:94] pod "coredns-66bc5c9577-pvw29" is "Ready"
	I1102 12:49:52.416419   14235 pod_ready.go:86] duration metric: took 3.773029ms for pod "coredns-66bc5c9577-pvw29" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.418022   14235 pod_ready.go:83] waiting for pod "etcd-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.421469   14235 pod_ready.go:94] pod "etcd-addons-341255" is "Ready"
	I1102 12:49:52.421493   14235 pod_ready.go:86] duration metric: took 3.451366ms for pod "etcd-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.423166   14235 pod_ready.go:83] waiting for pod "kube-apiserver-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.426368   14235 pod_ready.go:94] pod "kube-apiserver-addons-341255" is "Ready"
	I1102 12:49:52.426387   14235 pod_ready.go:86] duration metric: took 3.202034ms for pod "kube-apiserver-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.428189   14235 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.813015   14235 pod_ready.go:94] pod "kube-controller-manager-addons-341255" is "Ready"
	I1102 12:49:52.813042   14235 pod_ready.go:86] duration metric: took 384.835847ms for pod "kube-controller-manager-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:53.013041   14235 pod_ready.go:83] waiting for pod "kube-proxy-prdwm" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:53.412594   14235 pod_ready.go:94] pod "kube-proxy-prdwm" is "Ready"
	I1102 12:49:53.412621   14235 pod_ready.go:86] duration metric: took 399.556047ms for pod "kube-proxy-prdwm" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:53.613189   14235 pod_ready.go:83] waiting for pod "kube-scheduler-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:54.012911   14235 pod_ready.go:94] pod "kube-scheduler-addons-341255" is "Ready"
	I1102 12:49:54.012941   14235 pod_ready.go:86] duration metric: took 399.725275ms for pod "kube-scheduler-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:54.012952   14235 pod_ready.go:40] duration metric: took 1.60358564s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
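	
	The extra wait above polls kube-system pods matching any one of six label selectors. A rough shell equivalent of that check, with the selectors taken from the log and the timeout matching the 4m0s budget:
	
	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=240s
	    done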
	I1102 12:49:54.056329   14235 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 12:49:54.058276   14235 out.go:179] * Done! kubectl is now configured to use "addons-341255" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.488883533Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-xrfb9/POD" id=df21b13f-68a7-46de-89a2-ea95f3f6e13f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.488995286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.498129322Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-xrfb9 Namespace:default ID:89f5fa95446eaeac493b1d453b5783c34d54002d7f556c98dcc24266dac5cb95 UID:70c5182f-d5af-4056-8632-886cf1576d53 NetNS:/var/run/netns/9eddc954-2083-4d52-a6b7-aa82745467a5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00068c410}] Aliases:map[]}"
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.498189271Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-xrfb9 to CNI network \"kindnet\" (type=ptp)"
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.509048916Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-xrfb9 Namespace:default ID:89f5fa95446eaeac493b1d453b5783c34d54002d7f556c98dcc24266dac5cb95 UID:70c5182f-d5af-4056-8632-886cf1576d53 NetNS:/var/run/netns/9eddc954-2083-4d52-a6b7-aa82745467a5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00068c410}] Aliases:map[]}"
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.509197742Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-xrfb9 for CNI network kindnet (type=ptp)"
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.510166205Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.51098464Z" level=info msg="Ran pod sandbox 89f5fa95446eaeac493b1d453b5783c34d54002d7f556c98dcc24266dac5cb95 with infra container: default/hello-world-app-5d498dc89-xrfb9/POD" id=df21b13f-68a7-46de-89a2-ea95f3f6e13f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.512179223Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=027ba683-9aa5-404d-935b-d707b1967e3e name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.512319438Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=027ba683-9aa5-404d-935b-d707b1967e3e name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.512354525Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=027ba683-9aa5-404d-935b-d707b1967e3e name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.512988563Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=31f4f5d2-7850-4582-908e-e826894f5c09 name=/runtime.v1.ImageService/PullImage
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.520422325Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.899115182Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=31f4f5d2-7850-4582-908e-e826894f5c09 name=/runtime.v1.ImageService/PullImage
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.899675669Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=da7392b7-9166-4e77-9a53-13fdc34ec4b8 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.901484118Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=391d839b-c656-42b5-b9da-32f475b937b8 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.906902455Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-xrfb9/hello-world-app" id=e0ea1547-0c54-4a4f-8597-39d69540e52a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.907046439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.912394782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.912613219Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1236aee378150295b72becb50d11b7e90c42b3e1a77fa4fc64774f6455fddcd2/merged/etc/passwd: no such file or directory"
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.912644025Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1236aee378150295b72becb50d11b7e90c42b3e1a77fa4fc64774f6455fddcd2/merged/etc/group: no such file or directory"
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.912901189Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.945368341Z" level=info msg="Created container 3b65a0578097830cbac3fef14dc7cd38ac6ce0ea0d042e2517b51fe8319f0ba7: default/hello-world-app-5d498dc89-xrfb9/hello-world-app" id=e0ea1547-0c54-4a4f-8597-39d69540e52a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.946053405Z" level=info msg="Starting container: 3b65a0578097830cbac3fef14dc7cd38ac6ce0ea0d042e2517b51fe8319f0ba7" id=ac186aa8-3a2e-4cef-8b83-2f96bc35a6e8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 12:52:31 addons-341255 crio[777]: time="2025-11-02T12:52:31.94813709Z" level=info msg="Started container" PID=10087 containerID=3b65a0578097830cbac3fef14dc7cd38ac6ce0ea0d042e2517b51fe8319f0ba7 description=default/hello-world-app-5d498dc89-xrfb9/hello-world-app id=ac186aa8-3a2e-4cef-8b83-2f96bc35a6e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=89f5fa95446eaeac493b1d453b5783c34d54002d7f556c98dcc24266dac5cb95
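	
	To inspect the hello-world-app container that CRI-O just started, the usual CRI-side commands on the node would be (container ID taken from the "Started container" line above):
	
	    sudo crictl ps --name hello-world-app
	    sudo crictl logs 3b65a0578097830cbac3fef14dc7cd38ac6ce0ea0d042e2517b51fe8319f0ba7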
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	3b65a05780978       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   89f5fa95446ea       hello-world-app-5d498dc89-xrfb9             default
	b7219056e0b6b       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   5d1e26bcd28ff       registry-creds-764b6fb674-xqr5t             kube-system
	13b6b0a282d83       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   ec20056c76ee8       nginx                                       default
	32e3229ddec35       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   44db3cb59aabc       busybox                                     default
	fbe639b2e9dc1       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago            Running             controller                               0                   b80477cbd65d4       ingress-nginx-controller-675c5ddd98-f7qb7   ingress-nginx
	01ffcd58446f1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago            Running             gcp-auth                                 0                   dfe86d2ee286b       gcp-auth-78565c9fb4-c6tbn                   gcp-auth
	10c8828416e15       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago            Running             csi-snapshotter                          0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	335dea3014c65       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago            Running             csi-provisioner                          0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	d965e3c9f58f1       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago            Running             liveness-probe                           0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	22e0d656997f4       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago            Running             hostpath                                 0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	51708e3b1d7e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago            Running             gadget                                   0                   708bf147a941d       gadget-5bvt2                                gadget
	0f107dadfe187       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	594c6f0eb785a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   2280e43bab659       registry-proxy-2rjx9                        kube-system
	b27bf0b460e8b       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   378277074f44c       nvidia-device-plugin-daemonset-5g45d        kube-system
	0e2b30bfc0c00       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   4fa6d53b3e519       amd-gpu-device-plugin-kjxsc                 kube-system
	bb2515742aa6f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   85a81da1438ed       snapshot-controller-7d9fbc56b8-lrxfs        kube-system
	80c697c3d1f58       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	b8a160d819000       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   f5cdc54ab01d6       csi-hostpath-resizer-0                      kube-system
	7f356839f6e13       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             3 minutes ago            Exited              patch                                    1                   30540d3818a4a       ingress-nginx-admission-patch-28fhs         ingress-nginx
	778f25bd3fb1d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   0d7be58985e4c       ingress-nginx-admission-create-6nwnq        ingress-nginx
	12ae95cb9bed4       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   f00e7f7c7f2fb       snapshot-controller-7d9fbc56b8-d8c66        kube-system
	69fcc9180e578       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   53c67a2d10336       csi-hostpath-attacher-0                     kube-system
	5ae01cdfa3f78       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago            Running             cloud-spanner-emulator                   0                   8cacc2f156fb5       cloud-spanner-emulator-86bd5cbb97-qg8w6     default
	55e4c687c6f1b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   fb9109f719725       local-path-provisioner-648f6765c9-9x2dm     local-path-storage
	450cdc62b3458       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   94e4054affe7d       registry-6b586f9694-w59vr                   kube-system
	9ae7b5a96cea4       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   23ebd676dee25       yakd-dashboard-5ff678cb9-plk2f              yakd-dashboard
	2513ea12acbf8       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   7c68ce2f49499       metrics-server-85b7d694d7-gxjkw             kube-system
	41eb7ad7b2799       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   82443591c18eb       kube-ingress-dns-minikube                   kube-system
	589499b7daf04       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   c8fd67887d232       coredns-66bc5c9577-pvw29                    kube-system
	157ed615657fe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   3641637f6bf88       storage-provisioner                         kube-system
	b21a6de10950e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   3c0f818d95012       kindnet-wsss9                               kube-system
	8a34986297bdf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   cd40f3f885433       kube-proxy-prdwm                            kube-system
	597a4d36c6b41       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   77289814384a2       kube-scheduler-addons-341255                kube-system
	4bccdcbc84a5c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   bb5efe6569e63       kube-controller-manager-addons-341255       kube-system
	16ef3c3243c97       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   6f9d6af1fdbec       etcd-addons-341255                          kube-system
	566e394627151       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   272e93517f3bd       kube-apiserver-addons-341255                kube-system
	
	
	==> coredns [589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce] <==
	[INFO] 10.244.0.22:43792 - 47265 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007054296s
	[INFO] 10.244.0.22:41718 - 37512 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004654744s
	[INFO] 10.244.0.22:35947 - 1239 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005426991s
	[INFO] 10.244.0.22:50538 - 21845 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00479228s
	[INFO] 10.244.0.22:44237 - 38764 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004829608s
	[INFO] 10.244.0.22:33517 - 4911 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001198121s
	[INFO] 10.244.0.22:47705 - 11986 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00195774s
	[INFO] 10.244.0.26:56156 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000197077s
	[INFO] 10.244.0.26:41560 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000167249s
	[INFO] 10.244.0.30:34284 - 40355 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000210068s
	[INFO] 10.244.0.30:39999 - 39101 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000288694s
	[INFO] 10.244.0.30:50731 - 32944 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000164917s
	[INFO] 10.244.0.30:34784 - 41944 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000198155s
	[INFO] 10.244.0.30:52220 - 45584 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000115677s
	[INFO] 10.244.0.30:38238 - 61620 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00013696s
	[INFO] 10.244.0.30:46831 - 28986 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003200463s
	[INFO] 10.244.0.30:57930 - 16293 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004105628s
	[INFO] 10.244.0.30:59567 - 64148 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.004112503s
	[INFO] 10.244.0.30:42685 - 32305 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.00453714s
	[INFO] 10.244.0.30:35666 - 38868 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004437429s
	[INFO] 10.244.0.30:52779 - 63304 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005261607s
	[INFO] 10.244.0.30:54795 - 2576 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.003871882s
	[INFO] 10.244.0.30:58002 - 41517 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004992712s
	[INFO] 10.244.0.30:49854 - 50435 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001687078s
	[INFO] 10.244.0.30:42898 - 40687 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001768954s
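
The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion: the pod's resolv.conf uses ndots:5, so a short name like accounts.google.com (two dots, below the threshold) is tried against every search domain before being sent upstream as-is. A minimal sketch of that try order, assuming the search list implied by the logged queries (the pod namespace and the GCE host domains are read off the log; real resolvers also fall back to the search list when the bare query fails, which is not modeled here):

	package main

	import (
		"fmt"
		"strings"
	)

	// candidates reproduces the resolver's try order for names below the
	// ndots threshold (the case in the log): each search suffix first,
	// then the name as an absolute query.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name)
	}

	func main() {
		// Search list as implied by the logged queries (cluster domains,
		// a bare "local", then the GCE VM's host domains).
		search := []string{
			"kube-system.svc.cluster.local", // namespace differs per pod
			"svc.cluster.local",
			"cluster.local",
			"local",
			"us-east4-a.c.k8s-minikube.internal",
			"c.k8s-minikube.internal",
			"google.internal",
		}
		for _, q := range candidates("accounts.google.com", search, 5) {
			fmt.Println(q)
		}
	}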
	
	
	==> describe nodes <==
	Name:               addons-341255
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-341255
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=addons-341255
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T12_47_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-341255
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-341255"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 12:47:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-341255
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 12:52:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 12:52:13 +0000   Sun, 02 Nov 2025 12:47:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 12:52:13 +0000   Sun, 02 Nov 2025 12:47:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 12:52:13 +0000   Sun, 02 Nov 2025 12:47:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 12:52:13 +0000   Sun, 02 Nov 2025 12:48:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-341255
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                82d5416c-386b-4580-893a-4c29b1676015
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  default                     cloud-spanner-emulator-86bd5cbb97-qg8w6      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  default                     hello-world-app-5d498dc89-xrfb9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-5bvt2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  gcp-auth                    gcp-auth-78565c9fb4-c6tbn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-f7qb7    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m37s
	  kube-system                 amd-gpu-device-plugin-kjxsc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 coredns-66bc5c9577-pvw29                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m39s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpathplugin-dj5hr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 etcd-addons-341255                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m46s
	  kube-system                 kindnet-wsss9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m39s
	  kube-system                 kube-apiserver-addons-341255                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-addons-341255        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-prdwm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-addons-341255                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 metrics-server-85b7d694d7-gxjkw              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m38s
	  kube-system                 nvidia-device-plugin-daemonset-5g45d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 registry-6b586f9694-w59vr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 registry-creds-764b6fb674-xqr5t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 registry-proxy-2rjx9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 snapshot-controller-7d9fbc56b8-d8c66         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-lrxfs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  local-path-storage          local-path-provisioner-648f6765c9-9x2dm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-plk2f               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m38s  kube-proxy       
	  Normal  Starting                 4m45s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s  kubelet          Node addons-341255 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s  kubelet          Node addons-341255 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s  kubelet          Node addons-341255 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m40s  node-controller  Node addons-341255 event: Registered Node addons-341255 in Controller
	  Normal  NodeReady                3m58s  kubelet          Node addons-341255 status is now: NodeReady
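
The condition data rendered above can also be read programmatically; a minimal sketch, assuming client-go is available (the node name comes from the report, the kubeconfig location is an assumption):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Default kubeconfig path; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-341255", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Print the same Conditions table `kubectl describe node` shows above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}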
	
	
	==> dmesg <==
	[  +0.083631] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023935] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.640330] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 2 12:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.052730] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023920] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +2.047704] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +4.031606] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +8.511092] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[ +16.382292] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 12:51] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
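
The repeated "martian source ... from 127.0.0.1" entries are consistent with loopback-sourced traffic being routed into the pod network: kube-proxy enables exactly that by setting route_localnet=1 (see its log below), and log_martians makes the kernel report such packets. A minimal sketch, assuming a Linux host with /proc mounted, for inspecting the two sysctls involved:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// sysctl reads a kernel tunable from its /proc/sys path.
	func sysctl(path string) string {
		b, err := os.ReadFile(path)
		if err != nil {
			return "<unreadable: " + err.Error() + ">"
		}
		return strings.TrimSpace(string(b))
	}

	func main() {
		// route_localnet lets 127.0.0.1 traffic be routed to node ports;
		// log_martians controls whether such packets are logged as "martian".
		for _, key := range []string{
			"/proc/sys/net/ipv4/conf/all/route_localnet",
			"/proc/sys/net/ipv4/conf/all/log_martians",
			"/proc/sys/net/ipv4/conf/eth0/route_localnet",
		} {
			fmt.Printf("%s = %s\n", key, sysctl(key))
		}
	}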
	
	
	==> etcd [16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08] <==
	{"level":"warn","ts":"2025-11-02T12:47:44.422487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.428558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.435313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.441142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.448949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.455113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.461804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.475284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.481208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.489117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.537996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:55.911111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:55.917220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:48:22.257462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:48:22.263769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:48:22.279576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:48:22.286600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:48:42.533898Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.604566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-02T12:48:42.533989Z","caller":"traceutil/trace.go:172","msg":"trace[7521513] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:959; }","duration":"113.719761ms","start":"2025-11-02T12:48:42.420259Z","end":"2025-11-02T12:48:42.533978Z","steps":["trace[7521513] 'range keys from in-memory index tree'  (duration: 113.519179ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T12:48:42.644735Z","caller":"traceutil/trace.go:172","msg":"trace[1848461068] transaction","detail":"{read_only:false; response_revision:961; number_of_response:1; }","duration":"105.176239ms","start":"2025-11-02T12:48:42.539540Z","end":"2025-11-02T12:48:42.644716Z","steps":["trace[1848461068] 'process raft request'  (duration: 101.5453ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T12:48:52.789125Z","caller":"traceutil/trace.go:172","msg":"trace[1199722892] transaction","detail":"{read_only:false; response_revision:1057; number_of_response:1; }","duration":"138.812201ms","start":"2025-11-02T12:48:52.650291Z","end":"2025-11-02T12:48:52.789103Z","steps":["trace[1199722892] 'process raft request'  (duration: 54.309757ms)","trace[1199722892] 'compare'  (duration: 84.265544ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-02T12:49:01.411687Z","caller":"traceutil/trace.go:172","msg":"trace[1827374070] linearizableReadLoop","detail":"{readStateIndex:1163; appliedIndex:1163; }","duration":"134.668877ms","start":"2025-11-02T12:49:01.277001Z","end":"2025-11-02T12:49:01.411670Z","steps":["trace[1827374070] 'read index received'  (duration: 134.66276ms)","trace[1827374070] 'applied index is now lower than readState.Index'  (duration: 4.633µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-02T12:49:01.411779Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.768253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-02T12:49:01.411796Z","caller":"traceutil/trace.go:172","msg":"trace[828441304] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1129; }","duration":"134.797973ms","start":"2025-11-02T12:49:01.276993Z","end":"2025-11-02T12:49:01.411791Z","steps":["trace[828441304] 'agreement among raft nodes before linearized reading'  (duration: 134.74384ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T12:49:01.411835Z","caller":"traceutil/trace.go:172","msg":"trace[84704451] transaction","detail":"{read_only:false; response_revision:1130; number_of_response:1; }","duration":"153.097575ms","start":"2025-11-02T12:49:01.258728Z","end":"2025-11-02T12:49:01.411825Z","steps":["trace[84704451] 'process raft request'  (duration: 152.977123ms)"],"step_count":1}
	
	
	==> gcp-auth [01ffcd58446f19ea5e6ba66051369a5d913a8ce4806cb716c4abe720de2e46c5] <==
	2025/11/02 12:49:08 GCP Auth Webhook started!
	2025/11/02 12:49:54 Ready to marshal response ...
	2025/11/02 12:49:54 Ready to write response ...
	2025/11/02 12:49:54 Ready to marshal response ...
	2025/11/02 12:49:54 Ready to write response ...
	2025/11/02 12:49:54 Ready to marshal response ...
	2025/11/02 12:49:54 Ready to write response ...
	2025/11/02 12:50:08 Ready to marshal response ...
	2025/11/02 12:50:08 Ready to write response ...
	2025/11/02 12:50:11 Ready to marshal response ...
	2025/11/02 12:50:11 Ready to write response ...
	2025/11/02 12:50:11 Ready to marshal response ...
	2025/11/02 12:50:11 Ready to write response ...
	2025/11/02 12:50:14 Ready to marshal response ...
	2025/11/02 12:50:14 Ready to write response ...
	2025/11/02 12:50:18 Ready to marshal response ...
	2025/11/02 12:50:18 Ready to write response ...
	2025/11/02 12:50:29 Ready to marshal response ...
	2025/11/02 12:50:29 Ready to write response ...
	2025/11/02 12:50:55 Ready to marshal response ...
	2025/11/02 12:50:55 Ready to write response ...
	2025/11/02 12:52:31 Ready to marshal response ...
	2025/11/02 12:52:31 Ready to write response ...
	
	
	==> kernel <==
	 12:52:32 up 35 min,  0 user,  load average: 0.58, 1.00, 0.53
	Linux addons-341255 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3] <==
	I1102 12:50:24.264308       1 main.go:301] handling current node
	I1102 12:50:34.262459       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:50:34.262495       1 main.go:301] handling current node
	I1102 12:50:44.262944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:50:44.262974       1 main.go:301] handling current node
	I1102 12:50:54.262387       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:50:54.262414       1 main.go:301] handling current node
	I1102 12:51:04.262964       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:51:04.263002       1 main.go:301] handling current node
	I1102 12:51:14.263002       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:51:14.263033       1 main.go:301] handling current node
	I1102 12:51:24.263040       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:51:24.263073       1 main.go:301] handling current node
	I1102 12:51:34.264391       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:51:34.264429       1 main.go:301] handling current node
	I1102 12:51:44.267667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:51:44.267700       1 main.go:301] handling current node
	I1102 12:51:54.268446       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:51:54.268485       1 main.go:301] handling current node
	I1102 12:52:04.262708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:52:04.262750       1 main.go:301] handling current node
	I1102 12:52:14.262611       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:52:14.262645       1 main.go:301] handling current node
	I1102 12:52:24.271633       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:52:24.271663       1 main.go:301] handling current node
	
	
	==> kube-apiserver [566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355] <==
	 > logger="UnhandledError"
	E1102 12:48:42.716330       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.165.64:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.165.64:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.165.64:443: connect: connection refused" logger="UnhandledError"
	W1102 12:48:43.717158       1 handler_proxy.go:99] no RequestInfo found in the context
	E1102 12:48:43.717233       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1102 12:48:43.717159       1 handler_proxy.go:99] no RequestInfo found in the context
	I1102 12:48:43.717247       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1102 12:48:43.717265       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1102 12:48:43.718398       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1102 12:48:45.008494       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1102 12:48:47.723644       1 handler_proxy.go:99] no RequestInfo found in the context
	E1102 12:48:47.723757       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1102 12:48:47.723754       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.165.64:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.165.64:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1102 12:48:47.739877       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1102 12:50:02.772175       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43096: use of closed network connection
	E1102 12:50:02.921801       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43132: use of closed network connection
	I1102 12:50:08.654459       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1102 12:50:08.840991       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.72.190"}
	I1102 12:50:38.578927       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1102 12:52:31.255277       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.115.71"}
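
The aggregation errors earlier in this log (503s for v1beta1.metrics.k8s.io) reflect metrics-server not yet answering behind its APIService; whether the aggregated group has come up can be probed with a discovery call. A minimal sketch, assuming client-go and a kubeconfig at the default location:

	package main

	import (
		"fmt"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Fails with a 503-style error while the APIService backend is down,
		// matching the "failing or missing response" lines above.
		rl, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
		if err != nil {
			fmt.Println("metrics API not ready:", err)
			return
		}
		for _, r := range rl.APIResources {
			fmt.Println("available:", r.Name)
		}
	}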
	
	
	==> kube-controller-manager [4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94] <==
	I1102 12:47:52.242297       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1102 12:47:52.242474       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 12:47:52.242911       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 12:47:52.242945       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 12:47:52.242961       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 12:47:52.243100       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1102 12:47:52.244160       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 12:47:52.244163       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 12:47:52.247842       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 12:47:52.257608       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1102 12:47:52.257690       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1102 12:47:52.257736       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1102 12:47:52.257747       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1102 12:47:52.257754       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1102 12:47:52.263664       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-341255" podCIDRs=["10.244.0.0/24"]
	I1102 12:47:52.264613       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1102 12:47:54.771374       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1102 12:48:22.251989       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1102 12:48:22.252123       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1102 12:48:22.252159       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1102 12:48:22.270908       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1102 12:48:22.273889       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1102 12:48:22.353027       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 12:48:22.374372       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 12:48:37.249189       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47] <==
	I1102 12:47:53.827218       1 server_linux.go:53] "Using iptables proxy"
	I1102 12:47:54.019048       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 12:47:54.170645       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 12:47:54.171242       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1102 12:47:54.187734       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 12:47:54.495023       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 12:47:54.495103       1 server_linux.go:132] "Using iptables Proxier"
	I1102 12:47:54.590678       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 12:47:54.606393       1 server.go:527] "Version info" version="v1.34.1"
	I1102 12:47:54.606438       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 12:47:54.609871       1 config.go:200] "Starting service config controller"
	I1102 12:47:54.609943       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 12:47:54.609995       1 config.go:106] "Starting endpoint slice config controller"
	I1102 12:47:54.610023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 12:47:54.610056       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 12:47:54.610080       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 12:47:54.610452       1 config.go:309] "Starting node config controller"
	I1102 12:47:54.610478       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 12:47:54.710982       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 12:47:54.712630       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 12:47:54.712648       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 12:47:54.712656       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d] <==
	I1102 12:47:45.329814       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 12:47:45.330078       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 12:47:45.330137       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1102 12:47:45.330961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1102 12:47:45.331783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 12:47:45.331852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 12:47:45.331942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 12:47:45.332009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 12:47:45.332124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 12:47:45.332278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 12:47:45.332340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 12:47:45.332393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 12:47:45.332451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 12:47:45.332506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 12:47:45.332519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 12:47:45.332593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 12:47:45.332687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 12:47:45.332720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 12:47:45.332825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 12:47:45.332935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 12:47:45.332998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 12:47:45.333166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 12:47:46.141521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 12:47:46.171597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1102 12:47:46.730609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 12:50:55 addons-341255 kubelet[1283]: I1102 12:50:55.582037    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmqzl\" (UniqueName: \"kubernetes.io/projected/5f2f3ef7-aa5f-418b-96d2-60c3b03715d4-kube-api-access-rmqzl\") pod \"task-pv-pod-restore\" (UID: \"5f2f3ef7-aa5f-418b-96d2-60c3b03715d4\") " pod="default/task-pv-pod-restore"
	Nov 02 12:50:55 addons-341255 kubelet[1283]: I1102 12:50:55.687526    1283 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-f9167c4a-414c-4d3d-aada-3c60bc5efb77\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9243f626-b7ea-11f0-94d2-6a9a2bb2ebcc\") pod \"task-pv-pod-restore\" (UID: \"5f2f3ef7-aa5f-418b-96d2-60c3b03715d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/8e9594fb9fb882db2487cd483a1239666efaad29a97163db9d47a9da04a482f6/globalmount\"" pod="default/task-pv-pod-restore"
	Nov 02 12:50:57 addons-341255 kubelet[1283]: I1102 12:50:57.072027    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=1.845094971 podStartE2EDuration="2.072012094s" podCreationTimestamp="2025-11-02 12:50:55 +0000 UTC" firstStartedPulling="2025-11-02 12:50:55.781908794 +0000 UTC m=+188.553862944" lastFinishedPulling="2025-11-02 12:50:56.008825908 +0000 UTC m=+188.780780067" observedRunningTime="2025-11-02 12:50:57.070793984 +0000 UTC m=+189.842748152" watchObservedRunningTime="2025-11-02 12:50:57.072012094 +0000 UTC m=+189.843966260"
	Nov 02 12:51:02 addons-341255 kubelet[1283]: I1102 12:51:02.836462    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmqzl\" (UniqueName: \"kubernetes.io/projected/5f2f3ef7-aa5f-418b-96d2-60c3b03715d4-kube-api-access-rmqzl\") pod \"5f2f3ef7-aa5f-418b-96d2-60c3b03715d4\" (UID: \"5f2f3ef7-aa5f-418b-96d2-60c3b03715d4\") "
	Nov 02 12:51:02 addons-341255 kubelet[1283]: I1102 12:51:02.836553    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5f2f3ef7-aa5f-418b-96d2-60c3b03715d4-gcp-creds\") pod \"5f2f3ef7-aa5f-418b-96d2-60c3b03715d4\" (UID: \"5f2f3ef7-aa5f-418b-96d2-60c3b03715d4\") "
	Nov 02 12:51:02 addons-341255 kubelet[1283]: I1102 12:51:02.836698    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9243f626-b7ea-11f0-94d2-6a9a2bb2ebcc\") pod \"5f2f3ef7-aa5f-418b-96d2-60c3b03715d4\" (UID: \"5f2f3ef7-aa5f-418b-96d2-60c3b03715d4\") "
	Nov 02 12:51:02 addons-341255 kubelet[1283]: I1102 12:51:02.836693    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f2f3ef7-aa5f-418b-96d2-60c3b03715d4-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5f2f3ef7-aa5f-418b-96d2-60c3b03715d4" (UID: "5f2f3ef7-aa5f-418b-96d2-60c3b03715d4"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 02 12:51:02 addons-341255 kubelet[1283]: I1102 12:51:02.839312    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f2f3ef7-aa5f-418b-96d2-60c3b03715d4-kube-api-access-rmqzl" (OuterVolumeSpecName: "kube-api-access-rmqzl") pod "5f2f3ef7-aa5f-418b-96d2-60c3b03715d4" (UID: "5f2f3ef7-aa5f-418b-96d2-60c3b03715d4"). InnerVolumeSpecName "kube-api-access-rmqzl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 02 12:51:02 addons-341255 kubelet[1283]: I1102 12:51:02.840248    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^9243f626-b7ea-11f0-94d2-6a9a2bb2ebcc" (OuterVolumeSpecName: "task-pv-storage") pod "5f2f3ef7-aa5f-418b-96d2-60c3b03715d4" (UID: "5f2f3ef7-aa5f-418b-96d2-60c3b03715d4"). InnerVolumeSpecName "pvc-f9167c4a-414c-4d3d-aada-3c60bc5efb77". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 02 12:51:02 addons-341255 kubelet[1283]: I1102 12:51:02.937799    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmqzl\" (UniqueName: \"kubernetes.io/projected/5f2f3ef7-aa5f-418b-96d2-60c3b03715d4-kube-api-access-rmqzl\") on node \"addons-341255\" DevicePath \"\""
	Nov 02 12:51:02 addons-341255 kubelet[1283]: I1102 12:51:02.937840    1283 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5f2f3ef7-aa5f-418b-96d2-60c3b03715d4-gcp-creds\") on node \"addons-341255\" DevicePath \"\""
	Nov 02 12:51:02 addons-341255 kubelet[1283]: I1102 12:51:02.937881    1283 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-f9167c4a-414c-4d3d-aada-3c60bc5efb77\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9243f626-b7ea-11f0-94d2-6a9a2bb2ebcc\") on node \"addons-341255\" "
	Nov 02 12:51:02 addons-341255 kubelet[1283]: I1102 12:51:02.942247    1283 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-f9167c4a-414c-4d3d-aada-3c60bc5efb77" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^9243f626-b7ea-11f0-94d2-6a9a2bb2ebcc") on node "addons-341255"
	Nov 02 12:51:03 addons-341255 kubelet[1283]: I1102 12:51:03.038455    1283 reconciler_common.go:299] "Volume detached for volume \"pvc-f9167c4a-414c-4d3d-aada-3c60bc5efb77\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9243f626-b7ea-11f0-94d2-6a9a2bb2ebcc\") on node \"addons-341255\" DevicePath \"\""
	Nov 02 12:51:03 addons-341255 kubelet[1283]: I1102 12:51:03.086310    1283 scope.go:117] "RemoveContainer" containerID="5b29dce3615676f15db0b248321d72a59100e2dd229e4dc0cbba42dde3174293"
	Nov 02 12:51:03 addons-341255 kubelet[1283]: I1102 12:51:03.096460    1283 scope.go:117] "RemoveContainer" containerID="5b29dce3615676f15db0b248321d72a59100e2dd229e4dc0cbba42dde3174293"
	Nov 02 12:51:03 addons-341255 kubelet[1283]: E1102 12:51:03.096873    1283 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b29dce3615676f15db0b248321d72a59100e2dd229e4dc0cbba42dde3174293\": container with ID starting with 5b29dce3615676f15db0b248321d72a59100e2dd229e4dc0cbba42dde3174293 not found: ID does not exist" containerID="5b29dce3615676f15db0b248321d72a59100e2dd229e4dc0cbba42dde3174293"
	Nov 02 12:51:03 addons-341255 kubelet[1283]: I1102 12:51:03.096920    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b29dce3615676f15db0b248321d72a59100e2dd229e4dc0cbba42dde3174293"} err="failed to get container status \"5b29dce3615676f15db0b248321d72a59100e2dd229e4dc0cbba42dde3174293\": rpc error: code = NotFound desc = could not find container \"5b29dce3615676f15db0b248321d72a59100e2dd229e4dc0cbba42dde3174293\": container with ID starting with 5b29dce3615676f15db0b248321d72a59100e2dd229e4dc0cbba42dde3174293 not found: ID does not exist"
	Nov 02 12:51:03 addons-341255 kubelet[1283]: I1102 12:51:03.324548    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f2f3ef7-aa5f-418b-96d2-60c3b03715d4" path="/var/lib/kubelet/pods/5f2f3ef7-aa5f-418b-96d2-60c3b03715d4/volumes"
	Nov 02 12:51:20 addons-341255 kubelet[1283]: I1102 12:51:20.320655    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-kjxsc" secret="" err="secret \"gcp-auth\" not found"
	Nov 02 12:51:37 addons-341255 kubelet[1283]: I1102 12:51:37.321948    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2rjx9" secret="" err="secret \"gcp-auth\" not found"
	Nov 02 12:51:40 addons-341255 kubelet[1283]: I1102 12:51:40.320596    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5g45d" secret="" err="secret \"gcp-auth\" not found"
	Nov 02 12:52:31 addons-341255 kubelet[1283]: I1102 12:52:31.273914    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/70c5182f-d5af-4056-8632-886cf1576d53-gcp-creds\") pod \"hello-world-app-5d498dc89-xrfb9\" (UID: \"70c5182f-d5af-4056-8632-886cf1576d53\") " pod="default/hello-world-app-5d498dc89-xrfb9"
	Nov 02 12:52:31 addons-341255 kubelet[1283]: I1102 12:52:31.274001    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4jjz\" (UniqueName: \"kubernetes.io/projected/70c5182f-d5af-4056-8632-886cf1576d53-kube-api-access-j4jjz\") pod \"hello-world-app-5d498dc89-xrfb9\" (UID: \"70c5182f-d5af-4056-8632-886cf1576d53\") " pod="default/hello-world-app-5d498dc89-xrfb9"
	Nov 02 12:52:32 addons-341255 kubelet[1283]: I1102 12:52:32.423779    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-xrfb9" podStartSLOduration=1.035603194 podStartE2EDuration="1.423757814s" podCreationTimestamp="2025-11-02 12:52:31 +0000 UTC" firstStartedPulling="2025-11-02 12:52:31.51265429 +0000 UTC m=+284.284608435" lastFinishedPulling="2025-11-02 12:52:31.900808894 +0000 UTC m=+284.672763055" observedRunningTime="2025-11-02 12:52:32.422829955 +0000 UTC m=+285.194784123" watchObservedRunningTime="2025-11-02 12:52:32.423757814 +0000 UTC m=+285.195711981"
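
The podStartSLOduration in the last line is the end-to-end start time minus the image-pull window, computed on the monotonic readings (the m=+... values): 1.423757814s - (284.672763055 - 284.284608435)s = 1.035603194s, exactly the logged figure. A minimal sketch of that arithmetic, with the constants copied from the log line above:

	package main

	import "fmt"

	func main() {
		// Monotonic readings from the kubelet log line (seconds since start).
		const (
			firstStartedPulling = 284.284608435
			lastFinishedPulling = 284.672763055
			podStartE2E         = 1.423757814
		)
		// The SLO duration excludes time spent pulling the image.
		slo := podStartE2E - (lastFinishedPulling - firstStartedPulling)
		fmt.Printf("podStartSLOduration = %.9fs\n", slo) // 1.035603194s
	}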
	
	
	==> storage-provisioner [157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3] <==
	W1102 12:52:08.094603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:10.097601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:10.101947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:12.104322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:12.107655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:14.110489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:14.113888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:16.116940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:16.121392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:18.125150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:18.128879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:20.131145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:20.135710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:22.138268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:22.145106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:24.148010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:24.151522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:26.154593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:26.159330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:28.162121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:28.165466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:30.168477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:30.172181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:32.174828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:52:32.179128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
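The storage-provisioner warnings at the end of the dump above appear to come from leader election polling the deprecated v1 Endpoints API (the two-second cadence matches a lease renew loop); they are log noise rather than the cause of the failure below. A minimal way to compare the old and replacement APIs by hand, assuming kubectl access to the addons-341255 context from this report:

	# v1 Endpoints (deprecated since v1.33) vs. the discovery.k8s.io/v1 replacement:
	kubectl --context addons-341255 get endpoints -n kube-system
	kubectl --context addons-341255 get endpointslices.discovery.k8s.io -n kube-system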
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-341255 -n addons-341255
helpers_test.go:269: (dbg) Run:  kubectl --context addons-341255 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-6nwnq ingress-nginx-admission-patch-28fhs
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-341255 describe pod ingress-nginx-admission-create-6nwnq ingress-nginx-admission-patch-28fhs
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-341255 describe pod ingress-nginx-admission-create-6nwnq ingress-nginx-admission-patch-28fhs: exit status 1 (53.763978ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6nwnq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-28fhs" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-341255 describe pod ingress-nginx-admission-create-6nwnq ingress-nginx-admission-patch-28fhs: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (238.274403ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:52:33.647453   29268 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:52:33.647756   29268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:52:33.647766   29268 out.go:374] Setting ErrFile to fd 2...
	I1102 12:52:33.647770   29268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:52:33.647980   29268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:52:33.648221   29268 mustload.go:66] Loading cluster: addons-341255
	I1102 12:52:33.648613   29268 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:52:33.648632   29268 addons.go:607] checking whether the cluster is paused
	I1102 12:52:33.648768   29268 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:52:33.648793   29268 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:52:33.649346   29268 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:52:33.667625   29268 ssh_runner.go:195] Run: systemctl --version
	I1102 12:52:33.667687   29268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:52:33.684637   29268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:52:33.783114   29268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:52:33.783177   29268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:52:33.811098   29268 cri.go:89] found id: "b7219056e0b6b15ef34e6a47ce13664397166b10f2966fcdb27adff98f40df44"
	I1102 12:52:33.811123   29268 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:52:33.811130   29268 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:52:33.811135   29268 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:52:33.811140   29268 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:52:33.811145   29268 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:52:33.811155   29268 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:52:33.811160   29268 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:52:33.811165   29268 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:52:33.811173   29268 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:52:33.811178   29268 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:52:33.811182   29268 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:52:33.811191   29268 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:52:33.811195   29268 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:52:33.811214   29268 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:52:33.811224   29268 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:52:33.811227   29268 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:52:33.811231   29268 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:52:33.811233   29268 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:52:33.811235   29268 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:52:33.811238   29268 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:52:33.811240   29268 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:52:33.811242   29268 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:52:33.811245   29268 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:52:33.811247   29268 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:52:33.811250   29268 cri.go:89] found id: ""
	I1102 12:52:33.811292   29268 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:52:33.825263   29268 out.go:203] 
	W1102 12:52:33.826350   29268 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:52:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:52:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:52:33.826365   29268 out.go:285] * 
	* 
	W1102 12:52:33.829343   29268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:52:33.830503   29268 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable ingress --alsologtostderr -v=1: exit status 11 (240.29523ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:52:33.886848   29330 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:52:33.887122   29330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:52:33.887132   29330 out.go:374] Setting ErrFile to fd 2...
	I1102 12:52:33.887136   29330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:52:33.887366   29330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:52:33.887639   29330 mustload.go:66] Loading cluster: addons-341255
	I1102 12:52:33.887939   29330 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:52:33.887953   29330 addons.go:607] checking whether the cluster is paused
	I1102 12:52:33.888033   29330 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:52:33.888054   29330 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:52:33.888398   29330 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:52:33.906467   29330 ssh_runner.go:195] Run: systemctl --version
	I1102 12:52:33.906514   29330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:52:33.924456   29330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:52:34.021966   29330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:52:34.022055   29330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:52:34.051686   29330 cri.go:89] found id: "b7219056e0b6b15ef34e6a47ce13664397166b10f2966fcdb27adff98f40df44"
	I1102 12:52:34.051713   29330 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:52:34.051719   29330 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:52:34.051724   29330 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:52:34.051729   29330 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:52:34.051734   29330 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:52:34.051739   29330 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:52:34.051745   29330 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:52:34.051749   29330 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:52:34.051770   29330 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:52:34.051778   29330 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:52:34.051780   29330 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:52:34.051782   29330 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:52:34.051785   29330 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:52:34.051788   29330 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:52:34.051795   29330 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:52:34.051800   29330 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:52:34.051804   29330 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:52:34.051807   29330 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:52:34.051809   29330 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:52:34.051812   29330 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:52:34.051814   29330 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:52:34.051816   29330 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:52:34.051819   29330 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:52:34.051821   29330 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:52:34.051823   29330 cri.go:89] found id: ""
	I1102 12:52:34.051859   29330 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:52:34.065278   29330 out.go:203] 
	W1102 12:52:34.066528   29330 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:52:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:52:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:52:34.066544   29330 out.go:285] * 
	* 
	W1102 12:52:34.069486   29330 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:52:34.070672   29330 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.66s)
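Every addon-disable failure in this report shares the root cause visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl (which succeeds) and then shelling out to "sudo runc list -f json", which fails because /run/runc does not exist on this crio node. A rough manual reproduction over the docker driver, assuming the node container name addons-341255 from this report (the /run/crun path is an assumption about where crio keeps runtime state when its default runtime is crun rather than runc):

	docker exec addons-341255 sudo runc list -f json      # reproduces: open /run/runc: no such file or directory
	docker exec addons-341255 ls /run/runc /run/crun      # see which low-level runtime state dir actually exists
	docker exec addons-341255 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # this step works in the logs

The same "MK_ADDON_DISABLE_PAUSED ... open /run/runc" exit recurs verbatim in the InspektorGadget, MetricsServer, and CSI sections below.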

x
+
TestAddons/parallel/InspektorGadget (5.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-5bvt2" [6f4a5026-3ed8-4201-bf01-c4302378e139] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00347529s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (238.683387ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:50:10.740670   24670 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:50:10.740942   24670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:10.740951   24670 out.go:374] Setting ErrFile to fd 2...
	I1102 12:50:10.740955   24670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:10.741131   24670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:50:10.741378   24670 mustload.go:66] Loading cluster: addons-341255
	I1102 12:50:10.741733   24670 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:10.741747   24670 addons.go:607] checking whether the cluster is paused
	I1102 12:50:10.741826   24670 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:10.741840   24670 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:50:10.742176   24670 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:50:10.759721   24670 ssh_runner.go:195] Run: systemctl --version
	I1102 12:50:10.759775   24670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:50:10.776273   24670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:50:10.874045   24670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:50:10.874144   24670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:50:10.903043   24670 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:50:10.903080   24670 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:50:10.903085   24670 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:50:10.903088   24670 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:50:10.903091   24670 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:50:10.903095   24670 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:50:10.903098   24670 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:50:10.903100   24670 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:50:10.903102   24670 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:50:10.903117   24670 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:50:10.903120   24670 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:50:10.903122   24670 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:50:10.903124   24670 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:50:10.903127   24670 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:50:10.903129   24670 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:50:10.903135   24670 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:50:10.903139   24670 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:50:10.903143   24670 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:50:10.903145   24670 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:50:10.903148   24670 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:50:10.903150   24670 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:50:10.903152   24670 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:50:10.903154   24670 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:50:10.903157   24670 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:50:10.903159   24670 cri.go:89] found id: ""
	I1102 12:50:10.903206   24670 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:50:10.916790   24670 out.go:203] 
	W1102 12:50:10.918077   24670 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:50:10.918096   24670 out.go:285] * 
	* 
	W1102 12:50:10.921199   24670 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:50:10.922383   24670 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.24s)

x
+
TestAddons/parallel/MetricsServer (6.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.074296ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-gxjkw" [06bf62df-163b-4afb-9505-6cc7bdca087f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00289324s
addons_test.go:463: (dbg) Run:  kubectl --context addons-341255 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (249.139683ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:50:09.289697   24318 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:50:09.289837   24318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:09.289847   24318 out.go:374] Setting ErrFile to fd 2...
	I1102 12:50:09.289852   24318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:09.290073   24318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:50:09.290328   24318 mustload.go:66] Loading cluster: addons-341255
	I1102 12:50:09.290682   24318 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:09.290697   24318 addons.go:607] checking whether the cluster is paused
	I1102 12:50:09.290779   24318 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:09.290796   24318 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:50:09.291163   24318 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:50:09.309223   24318 ssh_runner.go:195] Run: systemctl --version
	I1102 12:50:09.309276   24318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:50:09.327058   24318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:50:09.425170   24318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:50:09.425246   24318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:50:09.456363   24318 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:50:09.456391   24318 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:50:09.456395   24318 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:50:09.456399   24318 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:50:09.456404   24318 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:50:09.456409   24318 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:50:09.456413   24318 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:50:09.456417   24318 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:50:09.456421   24318 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:50:09.456436   24318 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:50:09.456441   24318 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:50:09.456445   24318 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:50:09.456450   24318 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:50:09.456463   24318 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:50:09.456466   24318 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:50:09.456487   24318 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:50:09.456495   24318 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:50:09.456501   24318 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:50:09.456505   24318 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:50:09.456509   24318 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:50:09.456518   24318 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:50:09.456526   24318 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:50:09.456531   24318 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:50:09.456537   24318 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:50:09.456541   24318 cri.go:89] found id: ""
	I1102 12:50:09.456628   24318 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:50:09.473949   24318 out.go:203] 
	W1102 12:50:09.475442   24318 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:50:09.475466   24318 out.go:285] * 
	* 
	W1102 12:50:09.479764   24318 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:50:09.480580   24318 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.32s)
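Note that the metrics pipeline itself was healthy here: the pod passed readiness and "kubectl top pods" returned without error; only the disable step hit the runc pause-check failure described above. The aggregated metrics API can also be probed directly, independent of the addon machinery (same assumed context):

	kubectl --context addons-341255 get --raw /apis/metrics.k8s.io/v1beta1/nodes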

x
+
TestAddons/parallel/CSI (48.68s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1102 12:50:15.236047   12914 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1102 12:50:15.239008   12914 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1102 12:50:15.239030   12914 kapi.go:107] duration metric: took 3.006923ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.015503ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-341255 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-341255 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c16e3024-646c-46a4-a131-3daa3713a675] Pending
helpers_test.go:352: "task-pv-pod" [c16e3024-646c-46a4-a131-3daa3713a675] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c16e3024-646c-46a4-a131-3daa3713a675] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003410067s
addons_test.go:572: (dbg) Run:  kubectl --context addons-341255 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-341255 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-341255 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-341255 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-341255 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-341255 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-341255 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [5f2f3ef7-aa5f-418b-96d2-60c3b03715d4] Pending
helpers_test.go:352: "task-pv-pod-restore" [5f2f3ef7-aa5f-418b-96d2-60c3b03715d4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [5f2f3ef7-aa5f-418b-96d2-60c3b03715d4] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003353264s
addons_test.go:614: (dbg) Run:  kubectl --context addons-341255 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-341255 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-341255 delete volumesnapshot new-snapshot-demo
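Up to this point the CSI exercise itself succeeds; the steps above are the standard snapshot round trip (create PVC and pod, snapshot, delete the originals, restore a new PVC and pod from the snapshot). The polling loops on {.status.phase} and {.status.readyToUse} could equally be expressed as a single wait, sketched here with the names the test uses:

	kubectl --context addons-341255 wait volumesnapshot/new-snapshot-demo --for=jsonpath='{.status.readyToUse}'=true --timeout=6m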
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (241.386291ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:51:03.477490   27284 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:51:03.477815   27284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:51:03.477826   27284 out.go:374] Setting ErrFile to fd 2...
	I1102 12:51:03.477833   27284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:51:03.478026   27284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:51:03.478315   27284 mustload.go:66] Loading cluster: addons-341255
	I1102 12:51:03.478654   27284 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:51:03.478669   27284 addons.go:607] checking whether the cluster is paused
	I1102 12:51:03.478767   27284 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:51:03.478788   27284 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:51:03.479174   27284 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:51:03.496638   27284 ssh_runner.go:195] Run: systemctl --version
	I1102 12:51:03.496690   27284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:51:03.514059   27284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:51:03.613317   27284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:51:03.613427   27284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:51:03.641696   27284 cri.go:89] found id: "b7219056e0b6b15ef34e6a47ce13664397166b10f2966fcdb27adff98f40df44"
	I1102 12:51:03.641729   27284 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:51:03.641734   27284 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:51:03.641738   27284 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:51:03.641741   27284 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:51:03.641745   27284 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:51:03.641747   27284 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:51:03.641750   27284 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:51:03.641753   27284 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:51:03.641763   27284 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:51:03.641765   27284 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:51:03.641768   27284 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:51:03.641771   27284 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:51:03.641773   27284 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:51:03.641776   27284 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:51:03.641787   27284 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:51:03.641794   27284 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:51:03.641798   27284 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:51:03.641801   27284 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:51:03.641803   27284 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:51:03.641805   27284 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:51:03.641807   27284 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:51:03.641810   27284 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:51:03.641812   27284 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:51:03.641816   27284 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:51:03.641821   27284 cri.go:89] found id: ""
	I1102 12:51:03.641878   27284 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:51:03.656191   27284 out.go:203] 
	W1102 12:51:03.657682   27284 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:51:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:51:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:51:03.657708   27284 out.go:285] * 
	* 
	W1102 12:51:03.660778   27284 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:51:03.662369   27284 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (244.290721ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:51:03.720022   27345 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:51:03.720305   27345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:51:03.720316   27345 out.go:374] Setting ErrFile to fd 2...
	I1102 12:51:03.720320   27345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:51:03.720545   27345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:51:03.720808   27345 mustload.go:66] Loading cluster: addons-341255
	I1102 12:51:03.721145   27345 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:51:03.721160   27345 addons.go:607] checking whether the cluster is paused
	I1102 12:51:03.721249   27345 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:51:03.721265   27345 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:51:03.721657   27345 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:51:03.739531   27345 ssh_runner.go:195] Run: systemctl --version
	I1102 12:51:03.739599   27345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:51:03.756514   27345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:51:03.855279   27345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:51:03.855389   27345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:51:03.885436   27345 cri.go:89] found id: "b7219056e0b6b15ef34e6a47ce13664397166b10f2966fcdb27adff98f40df44"
	I1102 12:51:03.885463   27345 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:51:03.885469   27345 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:51:03.885472   27345 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:51:03.885476   27345 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:51:03.885480   27345 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:51:03.885483   27345 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:51:03.885487   27345 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:51:03.885502   27345 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:51:03.885511   27345 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:51:03.885516   27345 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:51:03.885520   27345 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:51:03.885524   27345 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:51:03.885528   27345 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:51:03.885535   27345 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:51:03.885541   27345 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:51:03.885549   27345 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:51:03.885561   27345 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:51:03.885576   27345 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:51:03.885580   27345 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:51:03.885583   27345 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:51:03.885586   27345 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:51:03.885590   27345 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:51:03.885593   27345 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:51:03.885597   27345 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:51:03.885600   27345 cri.go:89] found id: ""
	I1102 12:51:03.885653   27345 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:51:03.902173   27345 out.go:203] 
	W1102 12:51:03.903333   27345 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:51:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:51:03.903353   27345 out.go:285] * 
	W1102 12:51:03.906368   27345 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:51:03.907581   27345 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (48.68s)

TestAddons/parallel/Headlamp (2.51s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-341255 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-341255 --alsologtostderr -v=1: exit status 11 (246.79874ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1102 12:50:03.226816   23106 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:50:03.227211   23106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:03.227224   23106 out.go:374] Setting ErrFile to fd 2...
	I1102 12:50:03.227230   23106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:03.227490   23106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:50:03.227845   23106 mustload.go:66] Loading cluster: addons-341255
	I1102 12:50:03.228294   23106 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:03.228314   23106 addons.go:607] checking whether the cluster is paused
	I1102 12:50:03.228484   23106 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:03.228509   23106 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:50:03.229109   23106 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:50:03.247501   23106 ssh_runner.go:195] Run: systemctl --version
	I1102 12:50:03.247578   23106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:50:03.265459   23106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:50:03.364051   23106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:50:03.364143   23106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:50:03.392823   23106 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:50:03.392843   23106 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:50:03.392847   23106 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:50:03.392851   23106 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:50:03.392853   23106 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:50:03.392857   23106 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:50:03.392859   23106 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:50:03.392862   23106 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:50:03.392864   23106 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:50:03.392870   23106 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:50:03.392872   23106 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:50:03.392875   23106 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:50:03.392877   23106 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:50:03.392880   23106 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:50:03.392883   23106 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:50:03.392889   23106 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:50:03.392901   23106 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:50:03.392905   23106 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:50:03.392907   23106 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:50:03.392910   23106 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:50:03.392912   23106 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:50:03.392915   23106 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:50:03.392917   23106 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:50:03.392920   23106 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:50:03.392922   23106 cri.go:89] found id: ""
	I1102 12:50:03.392960   23106 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:50:03.406774   23106 out.go:203] 
	W1102 12:50:03.408372   23106 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:50:03.408392   23106 out.go:285] * 
	W1102 12:50:03.411303   23106 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:50:03.412627   23106 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-341255 --alsologtostderr -v=1": exit status 11
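
The enable path fails on the same paused-state probe; note that the crictl listing just above it does succeed. For reference, that listing step can be sketched as below, with the command copied from the log and the helper name hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs mirrors the crictl step from the log: with --quiet,
// crictl prints one container ID per line. Invoking plain `sudo crictl ...`
// instead of the log's `sudo -s eval "..."` is a simplification.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids)) // 24 in the stderr block above
}
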
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-341255
helpers_test.go:243: (dbg) docker inspect addons-341255:

-- stdout --
	[
	    {
	        "Id": "29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec",
	        "Created": "2025-11-02T12:47:34.624656749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T12:47:34.658243617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec/hostname",
	        "HostsPath": "/var/lib/docker/containers/29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec/hosts",
	        "LogPath": "/var/lib/docker/containers/29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec/29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec-json.log",
	        "Name": "/addons-341255",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-341255:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-341255",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "29b8f38f8195a5aa24733d9e8fe96bade9dcd7e6b0bceacb3f43e74c1170dcec",
	                "LowerDir": "/var/lib/docker/overlay2/0a7d7d6377799f36cb673230eaf0e07f2312d8e576987459c97616d49a041600-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a7d7d6377799f36cb673230eaf0e07f2312d8e576987459c97616d49a041600/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a7d7d6377799f36cb673230eaf0e07f2312d8e576987459c97616d49a041600/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a7d7d6377799f36cb673230eaf0e07f2312d8e576987459c97616d49a041600/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-341255",
	                "Source": "/var/lib/docker/volumes/addons-341255/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-341255",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-341255",
	                "name.minikube.sigs.k8s.io": "addons-341255",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e4d1397366a860695a2410391510b199369552311154ae2bc32a86e1e8a53e10",
	            "SandboxKey": "/var/run/docker/netns/e4d1397366a8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-341255": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:6a:18:ea:88:53",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e8d6efd86cb40045fd0347c60946cae75d49fbfd2c9b2e46da512cdb65f1946b",
	                    "EndpointID": "883d6fe78cfc3269fef654e0fe55892390369f64cccd73e3cae3bc2c05a2fc25",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-341255",
	                        "29b8f38f8195"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
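
The inspect output above is also how the harness reaches the node: the SSH client earlier in this report connects to 127.0.0.1:32768, the host port mapped to the guest's 22/tcp. A sketch of that lookup, reusing the Go template from the cli_runner lines in the log (helper name hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort resolves the host port bound to the container's 22/tcp. The
// -f template is copied verbatim from the log, including the single quotes,
// which is why the result is trimmed of quotes afterwards.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
}

func main() {
	port, err := sshHostPort("addons-341255")
	if err != nil {
		fmt.Println("inspect:", err)
		return
	}
	fmt.Println("ssh host port:", port) // 32768 for this run
}
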
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-341255 -n addons-341255
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-341255 logs -n 25: (1.106276883s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-793938 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-793938   │ jenkins │ v1.37.0 │ 02 Nov 25 12:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:47 UTC │
	│ delete  │ -p download-only-793938                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-793938   │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:47 UTC │
	│ start   │ -o=json --download-only -p download-only-537260 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-537260   │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:47 UTC │
	│ delete  │ -p download-only-537260                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-537260   │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:47 UTC │
	│ delete  │ -p download-only-793938                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-793938   │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:47 UTC │
	│ delete  │ -p download-only-537260                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-537260   │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:47 UTC │
	│ start   │ --download-only -p download-docker-117507 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-117507 │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │                     │
	│ delete  │ -p download-docker-117507                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-117507 │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:47 UTC │
	│ start   │ --download-only -p binary-mirror-594380 --alsologtostderr --binary-mirror http://127.0.0.1:44217 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-594380   │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │                     │
	│ delete  │ -p binary-mirror-594380                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-594380   │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:47 UTC │
	│ addons  │ enable dashboard -p addons-341255                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-341255          │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │                     │
	│ addons  │ disable dashboard -p addons-341255                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-341255          │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │                     │
	│ start   │ -p addons-341255 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-341255          │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:49 UTC │
	│ addons  │ addons-341255 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-341255          │ jenkins │ v1.37.0 │ 02 Nov 25 12:49 UTC │                     │
	│ addons  │ addons-341255 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-341255          │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	│ addons  │ enable headlamp -p addons-341255 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-341255          │ jenkins │ v1.37.0 │ 02 Nov 25 12:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 12:47:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 12:47:10.250973   14235 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:47:10.251260   14235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:47:10.251270   14235 out.go:374] Setting ErrFile to fd 2...
	I1102 12:47:10.251274   14235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:47:10.251443   14235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:47:10.251935   14235 out.go:368] Setting JSON to false
	I1102 12:47:10.252856   14235 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1782,"bootTime":1762085848,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 12:47:10.252939   14235 start.go:143] virtualization: kvm guest
	I1102 12:47:10.254846   14235 out.go:179] * [addons-341255] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 12:47:10.256214   14235 notify.go:221] Checking for updates...
	I1102 12:47:10.256241   14235 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 12:47:10.257444   14235 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 12:47:10.258968   14235 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 12:47:10.260422   14235 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 12:47:10.261701   14235 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 12:47:10.263102   14235 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 12:47:10.264485   14235 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 12:47:10.287640   14235 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 12:47:10.287744   14235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:47:10.342495   14235 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-02 12:47:10.333417346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:47:10.342620   14235 docker.go:319] overlay module found
	I1102 12:47:10.344275   14235 out.go:179] * Using the docker driver based on user configuration
	I1102 12:47:10.345411   14235 start.go:309] selected driver: docker
	I1102 12:47:10.345430   14235 start.go:930] validating driver "docker" against <nil>
	I1102 12:47:10.345439   14235 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 12:47:10.346016   14235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:47:10.397240   14235 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-02 12:47:10.388322177 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:47:10.397391   14235 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 12:47:10.397649   14235 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 12:47:10.399420   14235 out.go:179] * Using Docker driver with root privileges
	I1102 12:47:10.400658   14235 cni.go:84] Creating CNI manager for ""
	I1102 12:47:10.400718   14235 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 12:47:10.400728   14235 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 12:47:10.400787   14235 start.go:353] cluster config:
	{Name:addons-341255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-341255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 12:47:10.402196   14235 out.go:179] * Starting "addons-341255" primary control-plane node in "addons-341255" cluster
	I1102 12:47:10.403533   14235 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 12:47:10.404890   14235 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 12:47:10.406200   14235 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 12:47:10.406237   14235 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 12:47:10.406243   14235 cache.go:59] Caching tarball of preloaded images
	I1102 12:47:10.406310   14235 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 12:47:10.406310   14235 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 12:47:10.406321   14235 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 12:47:10.406675   14235 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/config.json ...
	I1102 12:47:10.406700   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/config.json: {Name:mk8cc4f6201cd536994d4ff0636752c655b01dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:10.423274   14235 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1102 12:47:10.423394   14235 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1102 12:47:10.423415   14235 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1102 12:47:10.423423   14235 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1102 12:47:10.423430   14235 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1102 12:47:10.423436   14235 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1102 12:47:22.860304   14235 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1102 12:47:22.860358   14235 cache.go:233] Successfully downloaded all kic artifacts
	I1102 12:47:22.860397   14235 start.go:360] acquireMachinesLock for addons-341255: {Name:mkf563c157e84d426caa00e0d150636e69ae60c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 12:47:22.860487   14235 start.go:364] duration metric: took 71.877µs to acquireMachinesLock for "addons-341255"
	I1102 12:47:22.860511   14235 start.go:93] Provisioning new machine with config: &{Name:addons-341255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-341255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 12:47:22.860608   14235 start.go:125] createHost starting for "" (driver="docker")
	I1102 12:47:22.862286   14235 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1102 12:47:22.862496   14235 start.go:159] libmachine.API.Create for "addons-341255" (driver="docker")
	I1102 12:47:22.862525   14235 client.go:173] LocalClient.Create starting
	I1102 12:47:22.862639   14235 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem
	I1102 12:47:23.190130   14235 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem
	I1102 12:47:23.235220   14235 cli_runner.go:164] Run: docker network inspect addons-341255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 12:47:23.252356   14235 cli_runner.go:211] docker network inspect addons-341255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 12:47:23.252418   14235 network_create.go:284] running [docker network inspect addons-341255] to gather additional debugging logs...
	I1102 12:47:23.252436   14235 cli_runner.go:164] Run: docker network inspect addons-341255
	W1102 12:47:23.268251   14235 cli_runner.go:211] docker network inspect addons-341255 returned with exit code 1
	I1102 12:47:23.268278   14235 network_create.go:287] error running [docker network inspect addons-341255]: docker network inspect addons-341255: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-341255 not found
	I1102 12:47:23.268300   14235 network_create.go:289] output of [docker network inspect addons-341255]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-341255 not found
	
	** /stderr **
	I1102 12:47:23.268463   14235 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 12:47:23.284913   14235 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ceacf0}
	I1102 12:47:23.284956   14235 network_create.go:124] attempt to create docker network addons-341255 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1102 12:47:23.285003   14235 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-341255 addons-341255
	I1102 12:47:23.340652   14235 network_create.go:108] docker network addons-341255 192.168.49.0/24 created
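
A quick way to confirm the subnet recorded above, assuming the Docker CLI on the same host (profile name taken from this log), is to ask Docker for the network's IPAM config:

	docker network inspect addons-341255 --format '{{(index .IPAM.Config 0).Subnet}}'   # expected: 192.168.49.0/24
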
	I1102 12:47:23.340681   14235 kic.go:121] calculated static IP "192.168.49.2" for the "addons-341255" container
	I1102 12:47:23.340732   14235 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 12:47:23.356328   14235 cli_runner.go:164] Run: docker volume create addons-341255 --label name.minikube.sigs.k8s.io=addons-341255 --label created_by.minikube.sigs.k8s.io=true
	I1102 12:47:23.373686   14235 oci.go:103] Successfully created a docker volume addons-341255
	I1102 12:47:23.373752   14235 cli_runner.go:164] Run: docker run --rm --name addons-341255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-341255 --entrypoint /usr/bin/test -v addons-341255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 12:47:30.226022   14235 cli_runner.go:217] Completed: docker run --rm --name addons-341255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-341255 --entrypoint /usr/bin/test -v addons-341255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.85222317s)
	I1102 12:47:30.226066   14235 oci.go:107] Successfully prepared a docker volume addons-341255
	I1102 12:47:30.226104   14235 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 12:47:30.226127   14235 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 12:47:30.226198   14235 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-341255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1102 12:47:34.550982   14235 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-341255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.324748588s)
	I1102 12:47:34.551010   14235 kic.go:203] duration metric: took 4.324880929s to extract preloaded images to volume ...
	W1102 12:47:34.551110   14235 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1102 12:47:34.551142   14235 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1102 12:47:34.551176   14235 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 12:47:34.609448   14235 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-341255 --name addons-341255 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-341255 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-341255 --network addons-341255 --ip 192.168.49.2 --volume addons-341255:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 12:47:34.892001   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Running}}
	I1102 12:47:34.910209   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
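
Docker binds each port published in the run command above (8443, 22, 2376, 5000, 32443) to a loopback ephemeral port; the chosen mappings can be listed in one step, assuming the Docker CLI on the host:

	docker port addons-341255   # one "<container-port> -> 127.0.0.1:<host-port>" line per mapping
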
	I1102 12:47:34.927310   14235 cli_runner.go:164] Run: docker exec addons-341255 stat /var/lib/dpkg/alternatives/iptables
	I1102 12:47:34.977823   14235 oci.go:144] the created container "addons-341255" has a running status.
	I1102 12:47:34.977856   14235 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa...
	I1102 12:47:35.288151   14235 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 12:47:35.314349   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:35.332530   14235 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 12:47:35.332553   14235 kic_runner.go:114] Args: [docker exec --privileged addons-341255 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 12:47:35.379067   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:35.396410   14235 machine.go:94] provisionDockerMachine start ...
	I1102 12:47:35.396504   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:35.412963   14235 main.go:143] libmachine: Using SSH client type: native
	I1102 12:47:35.413305   14235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1102 12:47:35.413329   14235 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 12:47:35.554407   14235 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-341255
	
	I1102 12:47:35.554431   14235 ubuntu.go:182] provisioning hostname "addons-341255"
	I1102 12:47:35.554491   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:35.572293   14235 main.go:143] libmachine: Using SSH client type: native
	I1102 12:47:35.572588   14235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1102 12:47:35.572609   14235 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-341255 && echo "addons-341255" | sudo tee /etc/hostname
	I1102 12:47:35.718544   14235 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-341255
	
	I1102 12:47:35.718655   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:35.736541   14235 main.go:143] libmachine: Using SSH client type: native
	I1102 12:47:35.736751   14235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1102 12:47:35.736780   14235 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-341255' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-341255/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-341255' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 12:47:35.873883   14235 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 12:47:35.873908   14235 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 12:47:35.873932   14235 ubuntu.go:190] setting up certificates
	I1102 12:47:35.873947   14235 provision.go:84] configureAuth start
	I1102 12:47:35.873996   14235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-341255
	I1102 12:47:35.891130   14235 provision.go:143] copyHostCerts
	I1102 12:47:35.891207   14235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 12:47:35.891320   14235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 12:47:35.891384   14235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 12:47:35.891436   14235 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.addons-341255 san=[127.0.0.1 192.168.49.2 addons-341255 localhost minikube]
	I1102 12:47:36.382190   14235 provision.go:177] copyRemoteCerts
	I1102 12:47:36.382245   14235 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 12:47:36.382279   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:36.398892   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:36.497344   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 12:47:36.515470   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1102 12:47:36.531754   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 12:47:36.547973   14235 provision.go:87] duration metric: took 674.01494ms to configureAuth
	I1102 12:47:36.548000   14235 ubuntu.go:206] setting minikube options for container-runtime
	I1102 12:47:36.548143   14235 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:47:36.548241   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:36.564916   14235 main.go:143] libmachine: Using SSH client type: native
	I1102 12:47:36.565119   14235 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1102 12:47:36.565136   14235 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 12:47:36.811352   14235 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 12:47:36.811379   14235 machine.go:97] duration metric: took 1.414948314s to provisionDockerMachine
	I1102 12:47:36.811392   14235 client.go:176] duration metric: took 13.948860905s to LocalClient.Create
	I1102 12:47:36.811414   14235 start.go:167] duration metric: took 13.948919577s to libmachine.API.Create "addons-341255"
	I1102 12:47:36.811421   14235 start.go:293] postStartSetup for "addons-341255" (driver="docker")
	I1102 12:47:36.811433   14235 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 12:47:36.811505   14235 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 12:47:36.811552   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:36.829245   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:36.930618   14235 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 12:47:36.934088   14235 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 12:47:36.934116   14235 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 12:47:36.934127   14235 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 12:47:36.934177   14235 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 12:47:36.934202   14235 start.go:296] duration metric: took 122.776223ms for postStartSetup
	I1102 12:47:36.934473   14235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-341255
	I1102 12:47:36.951179   14235 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/config.json ...
	I1102 12:47:36.951435   14235 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 12:47:36.951472   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:36.969135   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:37.064397   14235 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 12:47:37.068751   14235 start.go:128] duration metric: took 14.208129897s to createHost
	I1102 12:47:37.068776   14235 start.go:83] releasing machines lock for "addons-341255", held for 14.208276854s
	I1102 12:47:37.068845   14235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-341255
	I1102 12:47:37.086200   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 12:47:37.086248   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 12:47:37.086272   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 12:47:37.086295   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	W1102 12:47:37.086358   14235 start.go:789] pre-probe CA setup failed: create ca cert file asset for /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt: stat: stat /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt: no such file or directory
	I1102 12:47:37.086411   14235 ssh_runner.go:195] Run: cat /version.json
	I1102 12:47:37.086444   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:37.086508   14235 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 12:47:37.086582   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:37.105842   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:37.106245   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:37.257171   14235 ssh_runner.go:195] Run: systemctl --version
	I1102 12:47:37.263197   14235 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 12:47:37.294370   14235 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 12:47:37.298739   14235 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 12:47:37.298790   14235 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 12:47:37.323638   14235 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1102 12:47:37.323665   14235 start.go:496] detecting cgroup driver to use...
	I1102 12:47:37.323697   14235 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 12:47:37.323739   14235 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 12:47:37.338687   14235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 12:47:37.350279   14235 docker.go:218] disabling cri-docker service (if available) ...
	I1102 12:47:37.350336   14235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 12:47:37.366364   14235 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 12:47:37.383198   14235 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 12:47:37.461758   14235 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 12:47:37.546986   14235 docker.go:234] disabling docker service ...
	I1102 12:47:37.547052   14235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 12:47:37.564406   14235 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 12:47:37.576538   14235 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 12:47:37.659080   14235 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 12:47:37.739432   14235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 12:47:37.751738   14235 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 12:47:37.765160   14235 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 12:47:37.765210   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.774895   14235 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 12:47:37.774951   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.783540   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.792240   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.800981   14235 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 12:47:37.808871   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.817512   14235 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.830758   14235 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 12:47:37.839311   14235 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 12:47:37.846795   14235 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1102 12:47:37.846853   14235 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1102 12:47:37.858888   14235 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
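
The failed sysctl above is expected: /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, which is what the follow-up modprobe does. A minimal re-check of the same preconditions, assuming a shell inside the node container:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward           # should print 1 after the echo above
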
	I1102 12:47:37.867623   14235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 12:47:37.945257   14235 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 12:47:38.046677   14235 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 12:47:38.046754   14235 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 12:47:38.050653   14235 start.go:564] Will wait 60s for crictl version
	I1102 12:47:38.050712   14235 ssh_runner.go:195] Run: which crictl
	I1102 12:47:38.054226   14235 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 12:47:38.078417   14235 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 12:47:38.078540   14235 ssh_runner.go:195] Run: crio --version
	I1102 12:47:38.105053   14235 ssh_runner.go:195] Run: crio --version
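
The runtime probe above can be reproduced against the same CRI socket; a sketch assuming crictl is on the PATH inside the node:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
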
	I1102 12:47:38.133361   14235 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 12:47:38.134434   14235 cli_runner.go:164] Run: docker network inspect addons-341255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 12:47:38.150664   14235 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1102 12:47:38.154548   14235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 12:47:38.164365   14235 kubeadm.go:884] updating cluster {Name:addons-341255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-341255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 12:47:38.164476   14235 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 12:47:38.164523   14235 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 12:47:38.193799   14235 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 12:47:38.193818   14235 crio.go:433] Images already preloaded, skipping extraction
	I1102 12:47:38.193858   14235 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 12:47:38.218494   14235 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 12:47:38.218516   14235 cache_images.go:86] Images are preloaded, skipping loading
	I1102 12:47:38.218524   14235 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1102 12:47:38.218634   14235 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-341255 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-341255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 12:47:38.218713   14235 ssh_runner.go:195] Run: crio config
	I1102 12:47:38.261174   14235 cni.go:84] Creating CNI manager for ""
	I1102 12:47:38.261193   14235 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 12:47:38.261205   14235 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 12:47:38.261226   14235 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-341255 NodeName:addons-341255 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 12:47:38.261352   14235 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-341255"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
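
The rendered kubeadm config above can be sanity-checked before the real init further down; a hedged sketch using kubeadm's dry-run mode (config path taken from this log):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run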
	
	I1102 12:47:38.261411   14235 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 12:47:38.269401   14235 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 12:47:38.269460   14235 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 12:47:38.277052   14235 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1102 12:47:38.289179   14235 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 12:47:38.304119   14235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1102 12:47:38.316420   14235 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1102 12:47:38.319840   14235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 12:47:38.329295   14235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 12:47:38.410137   14235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 12:47:38.435055   14235 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255 for IP: 192.168.49.2
	I1102 12:47:38.435077   14235 certs.go:195] generating shared ca certs ...
	I1102 12:47:38.435097   14235 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:38.435232   14235 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 12:47:38.769624   14235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt ...
	I1102 12:47:38.769661   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt: {Name:mkd7e9806d5c59b627e491ddb10238af7d2db0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:38.769833   14235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key ...
	I1102 12:47:38.769845   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key: {Name:mka520025ec19fdfd442874b83fab35cad4035b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:38.769917   14235 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 12:47:38.792815   14235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt ...
	I1102 12:47:38.792839   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt: {Name:mk025efd1cd72c556595edea83c8f3ac5302e128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:38.792977   14235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key ...
	I1102 12:47:38.792987   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key: {Name:mk41b6975b4f28b3c6e2653413a7758d0d49b443 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:38.793049   14235 certs.go:257] generating profile certs ...
	I1102 12:47:38.793103   14235 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.key
	I1102 12:47:38.793116   14235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt with IP's: []
	I1102 12:47:39.068199   14235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt ...
	I1102 12:47:39.068228   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: {Name:mk5652c64a00c1081056c7430e928129af43b585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.068383   14235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.key ...
	I1102 12:47:39.068396   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.key: {Name:mk805628c23d7a2a67e0aca1f815503019fbc6a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.068466   14235 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key.b098b56c
	I1102 12:47:39.068485   14235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt.b098b56c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1102 12:47:39.358212   14235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt.b098b56c ...
	I1102 12:47:39.358242   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt.b098b56c: {Name:mkec8b9bcafff68570ea12859728f4971cf333e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.358398   14235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key.b098b56c ...
	I1102 12:47:39.358411   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key.b098b56c: {Name:mk5e6983ccda7941e58d81c2f99ae7bf50363979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.358494   14235 certs.go:382] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt.b098b56c -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt
	I1102 12:47:39.358604   14235 certs.go:386] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key.b098b56c -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key
	I1102 12:47:39.358661   14235 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.key
	I1102 12:47:39.358680   14235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.crt with IP's: []
	I1102 12:47:39.475727   14235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.crt ...
	I1102 12:47:39.475764   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.crt: {Name:mk2bc60461dd6f04b91a711910f304d7c9377359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.475913   14235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.key ...
	I1102 12:47:39.475924   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.key: {Name:mk42c30c70b7f69d5f760c39a21e7262ba065d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:39.476075   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 12:47:39.476113   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 12:47:39.476132   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 12:47:39.476153   14235 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 12:47:39.476700   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 12:47:39.493858   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 12:47:39.510591   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 12:47:39.527008   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 12:47:39.543052   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1102 12:47:39.558828   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 12:47:39.574925   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 12:47:39.591468   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 12:47:39.607514   14235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 12:47:39.625332   14235 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 12:47:39.637447   14235 ssh_runner.go:195] Run: openssl version
	I1102 12:47:39.643280   14235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 12:47:39.653602   14235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 12:47:39.657271   14235 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 12:47:39.657358   14235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 12:47:39.691223   14235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
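
The b5213941.0 symlink name above is the OpenSSL subject hash of the CA, computed by the x509 -hash call two lines up; it can be recomputed by hand inside the node:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
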
	I1102 12:47:39.699879   14235 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 12:47:39.703313   14235 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 12:47:39.703369   14235 kubeadm.go:401] StartCluster: {Name:addons-341255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-341255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 12:47:39.703431   14235 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:47:39.703477   14235 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:47:39.728822   14235 cri.go:89] found id: ""
	I1102 12:47:39.728896   14235 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 12:47:39.736780   14235 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 12:47:39.744195   14235 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 12:47:39.744248   14235 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 12:47:39.751322   14235 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 12:47:39.751339   14235 kubeadm.go:158] found existing configuration files:
	
	I1102 12:47:39.751378   14235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 12:47:39.758748   14235 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 12:47:39.758791   14235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 12:47:39.765507   14235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 12:47:39.772405   14235 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 12:47:39.772452   14235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 12:47:39.779276   14235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 12:47:39.786437   14235 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 12:47:39.786503   14235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 12:47:39.793237   14235 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 12:47:39.800058   14235 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 12:47:39.800129   14235 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1102 12:47:39.807124   14235 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 12:47:39.840703   14235 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 12:47:39.840775   14235 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 12:47:39.860076   14235 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 12:47:39.860168   14235 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1102 12:47:39.860231   14235 kubeadm.go:319] OS: Linux
	I1102 12:47:39.860342   14235 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 12:47:39.860434   14235 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 12:47:39.860508   14235 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 12:47:39.860595   14235 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 12:47:39.860663   14235 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 12:47:39.860729   14235 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 12:47:39.860797   14235 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 12:47:39.860865   14235 kubeadm.go:319] CGROUPS_IO: enabled
	I1102 12:47:39.927451   14235 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 12:47:39.927638   14235 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 12:47:39.927775   14235 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 12:47:39.936166   14235 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 12:47:39.938837   14235 out.go:252]   - Generating certificates and keys ...
	I1102 12:47:39.938936   14235 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 12:47:39.939037   14235 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 12:47:40.551502   14235 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 12:47:40.616977   14235 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 12:47:40.823531   14235 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 12:47:40.874776   14235 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 12:47:40.910654   14235 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 12:47:40.910777   14235 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-341255 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1102 12:47:41.014439   14235 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 12:47:41.014636   14235 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-341255 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1102 12:47:41.123901   14235 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 12:47:41.619712   14235 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 12:47:41.774419   14235 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 12:47:41.774515   14235 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 12:47:42.036727   14235 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 12:47:42.129305   14235 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 12:47:42.437928   14235 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 12:47:42.482408   14235 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 12:47:42.547403   14235 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 12:47:42.547996   14235 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 12:47:42.551684   14235 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 12:47:42.553131   14235 out.go:252]   - Booting up control plane ...
	I1102 12:47:42.553216   14235 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 12:47:42.553284   14235 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 12:47:42.553951   14235 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 12:47:42.567382   14235 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 12:47:42.567493   14235 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 12:47:42.573931   14235 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 12:47:42.574186   14235 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 12:47:42.574231   14235 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 12:47:42.670477   14235 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 12:47:42.670666   14235 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 12:47:43.172077   14235 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.755082ms
	I1102 12:47:43.175115   14235 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 12:47:43.175254   14235 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1102 12:47:43.175396   14235 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 12:47:43.175511   14235 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 12:47:45.230820   14235 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.055613744s
	I1102 12:47:45.334789   14235 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.159623843s
	I1102 12:47:46.676913   14235 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.50171218s
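Note: the control-plane-check phase polls the three health endpoints shown above until each answers. The same probes can be run manually from inside the node, using -k because the serving certificates are not in the host trust store:

	curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez      # kube-scheduler
	curl -k https://192.168.49.2:8443/livez    # kube-apiserver
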
	I1102 12:47:46.687183   14235 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 12:47:46.696476   14235 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 12:47:46.704332   14235 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 12:47:46.704683   14235 kubeadm.go:319] [mark-control-plane] Marking the node addons-341255 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 12:47:46.712318   14235 kubeadm.go:319] [bootstrap-token] Using token: mcu3nc.y8g41xelym1jkr4a
	I1102 12:47:46.713883   14235 out.go:252]   - Configuring RBAC rules ...
	I1102 12:47:46.713995   14235 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 12:47:46.717088   14235 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 12:47:46.722068   14235 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 12:47:46.724390   14235 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 12:47:46.726629   14235 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 12:47:46.729957   14235 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 12:47:47.082989   14235 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 12:47:47.499441   14235 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 12:47:48.082735   14235 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 12:47:48.083611   14235 kubeadm.go:319] 
	I1102 12:47:48.083677   14235 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 12:47:48.083685   14235 kubeadm.go:319] 
	I1102 12:47:48.083785   14235 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 12:47:48.083800   14235 kubeadm.go:319] 
	I1102 12:47:48.083835   14235 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 12:47:48.083945   14235 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 12:47:48.084030   14235 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 12:47:48.084040   14235 kubeadm.go:319] 
	I1102 12:47:48.084096   14235 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 12:47:48.084105   14235 kubeadm.go:319] 
	I1102 12:47:48.084169   14235 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 12:47:48.084179   14235 kubeadm.go:319] 
	I1102 12:47:48.084266   14235 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 12:47:48.084347   14235 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 12:47:48.084453   14235 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 12:47:48.084468   14235 kubeadm.go:319] 
	I1102 12:47:48.084600   14235 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 12:47:48.084715   14235 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 12:47:48.084725   14235 kubeadm.go:319] 
	I1102 12:47:48.084845   14235 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mcu3nc.y8g41xelym1jkr4a \
	I1102 12:47:48.084956   14235 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 \
	I1102 12:47:48.084991   14235 kubeadm.go:319] 	--control-plane 
	I1102 12:47:48.085013   14235 kubeadm.go:319] 
	I1102 12:47:48.085143   14235 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 12:47:48.085154   14235 kubeadm.go:319] 
	I1102 12:47:48.085259   14235 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mcu3nc.y8g41xelym1jkr4a \
	I1102 12:47:48.085383   14235 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 
	I1102 12:47:48.087049   14235 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1102 12:47:48.087192   14235 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
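Note: the --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest of the cluster CA's public key, so it can be re-derived on the control plane to verify a join command; the standard openssl pipeline, assuming the default CA path:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
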
	I1102 12:47:48.087217   14235 cni.go:84] Creating CNI manager for ""
	I1102 12:47:48.087227   14235 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 12:47:48.089685   14235 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 12:47:48.090949   14235 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 12:47:48.095164   14235 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 12:47:48.095180   14235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 12:47:48.107821   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
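Note: minikube has just applied the kindnet manifest it copied to /var/tmp/minikube/cni.yaml. Whether the CNI rollout landed can be checked inside the node; a sketch (the daemonset name "kindnet" is an assumption based on the default manifest):

	minikube ssh -p addons-341255 -- ls /etc/cni/net.d/
	kubectl -n kube-system get daemonset kindnet
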
	I1102 12:47:48.309800   14235 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 12:47:48.309881   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:48.309942   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-341255 minikube.k8s.io/updated_at=2025_11_02T12_47_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=addons-341255 minikube.k8s.io/primary=true
	I1102 12:47:48.319382   14235 ops.go:34] apiserver oom_adj: -16
	I1102 12:47:48.398111   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:48.898272   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:49.398735   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:49.898492   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:50.398651   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:50.898170   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:51.399062   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:51.898409   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:52.398479   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:52.898480   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:53.398249   14235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 12:47:53.463295   14235 kubeadm.go:1114] duration metric: took 5.153486386s to wait for elevateKubeSystemPrivileges
	I1102 12:47:53.463362   14235 kubeadm.go:403] duration metric: took 13.759997374s to StartCluster
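Note: the burst of "kubectl get sa default" runs above is a readiness poll: minikube retries on a roughly 500ms cadence until the default service account exists, which is the signal that the kube-system privilege elevation (the minikube-rbac clusterrolebinding created earlier) can be considered done. An equivalent shell loop, as a sketch:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done
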
	I1102 12:47:53.463386   14235 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:53.463542   14235 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 12:47:53.464211   14235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:53.464998   14235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 12:47:53.465061   14235 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 12:47:53.465094   14235 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
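Note: this toEnable map is the resolved addon set for the profile; individual entries correspond to the CLI toggles, e.g.:

	minikube -p addons-341255 addons enable metrics-server
	minikube -p addons-341255 addons disable volcano
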
	I1102 12:47:53.465246   14235 addons.go:70] Setting yakd=true in profile "addons-341255"
	I1102 12:47:53.465257   14235 addons.go:70] Setting ingress-dns=true in profile "addons-341255"
	I1102 12:47:53.465275   14235 addons.go:239] Setting addon yakd=true in "addons-341255"
	I1102 12:47:53.465276   14235 addons.go:239] Setting addon ingress-dns=true in "addons-341255"
	I1102 12:47:53.465293   14235 addons.go:70] Setting registry-creds=true in profile "addons-341255"
	I1102 12:47:53.465317   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.465320   14235 addons.go:239] Setting addon registry-creds=true in "addons-341255"
	I1102 12:47:53.465335   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.465353   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.465386   14235 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:47:53.465422   14235 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-341255"
	I1102 12:47:53.465453   14235 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-341255"
	I1102 12:47:53.465476   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.465525   14235 addons.go:70] Setting gcp-auth=true in profile "addons-341255"
	I1102 12:47:53.465587   14235 mustload.go:66] Loading cluster: addons-341255
	I1102 12:47:53.465598   14235 addons.go:70] Setting metrics-server=true in profile "addons-341255"
	I1102 12:47:53.465650   14235 addons.go:239] Setting addon metrics-server=true in "addons-341255"
	I1102 12:47:53.465769   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.465804   14235 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:47:53.465902   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.465916   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.465922   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.465951   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466101   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466121   14235 addons.go:70] Setting inspektor-gadget=true in profile "addons-341255"
	I1102 12:47:53.466135   14235 addons.go:239] Setting addon inspektor-gadget=true in "addons-341255"
	I1102 12:47:53.466157   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.466445   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466795   14235 out.go:179] * Verifying Kubernetes components...
	I1102 12:47:53.466514   14235 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-341255"
	I1102 12:47:53.467023   14235 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-341255"
	I1102 12:47:53.467048   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.467507   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466540   14235 addons.go:70] Setting default-storageclass=true in profile "addons-341255"
	I1102 12:47:53.468260   14235 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-341255"
	I1102 12:47:53.466553   14235 addons.go:70] Setting ingress=true in profile "addons-341255"
	I1102 12:47:53.469938   14235 addons.go:239] Setting addon ingress=true in "addons-341255"
	I1102 12:47:53.469992   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.470521   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.470884   14235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 12:47:53.466574   14235 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-341255"
	I1102 12:47:53.471122   14235 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-341255"
	I1102 12:47:53.471151   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.471638   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466584   14235 addons.go:70] Setting registry=true in profile "addons-341255"
	I1102 12:47:53.472755   14235 addons.go:239] Setting addon registry=true in "addons-341255"
	I1102 12:47:53.472787   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.466591   14235 addons.go:70] Setting cloud-spanner=true in profile "addons-341255"
	I1102 12:47:53.472855   14235 addons.go:239] Setting addon cloud-spanner=true in "addons-341255"
	I1102 12:47:53.472887   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.473269   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.473324   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466600   14235 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-341255"
	I1102 12:47:53.473929   14235 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-341255"
	I1102 12:47:53.466607   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.466608   14235 addons.go:70] Setting storage-provisioner=true in profile "addons-341255"
	I1102 12:47:53.474178   14235 addons.go:239] Setting addon storage-provisioner=true in "addons-341255"
	I1102 12:47:53.474205   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.466624   14235 addons.go:70] Setting volcano=true in profile "addons-341255"
	I1102 12:47:53.474362   14235 addons.go:239] Setting addon volcano=true in "addons-341255"
	I1102 12:47:53.474459   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.466646   14235 addons.go:70] Setting volumesnapshots=true in profile "addons-341255"
	I1102 12:47:53.475626   14235 addons.go:239] Setting addon volumesnapshots=true in "addons-341255"
	I1102 12:47:53.475670   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.482016   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.482560   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.482638   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.482726   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.484947   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.528159   14235 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1102 12:47:53.531987   14235 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1102 12:47:53.532027   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1102 12:47:53.532107   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
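Note: the docker container inspect template above extracts the host port that Docker mapped to the container's SSH port 22 (32768 here, which the sshutil clients below dial). docker port reports the same mapping; a sketch:

	docker port addons-341255 22
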
	I1102 12:47:53.534620   14235 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1102 12:47:53.537818   14235 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1102 12:47:53.537857   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1102 12:47:53.537924   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.543747   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.561171   14235 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1102 12:47:53.562600   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1102 12:47:53.563501   14235 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1102 12:47:53.563525   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1102 12:47:53.563704   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.563937   14235 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1102 12:47:53.565177   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1102 12:47:53.568068   14235 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-341255"
	I1102 12:47:53.568123   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.568548   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.570697   14235 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1102 12:47:53.570721   14235 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1102 12:47:53.570792   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.578647   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1102 12:47:53.578720   14235 out.go:179]   - Using image docker.io/registry:3.0.0
	I1102 12:47:53.578852   14235 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1102 12:47:53.580407   14235 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1102 12:47:53.582140   14235 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1102 12:47:53.582165   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1102 12:47:53.582265   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.582290   14235 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1102 12:47:53.582333   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1102 12:47:53.582373   14235 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1102 12:47:53.582383   14235 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1102 12:47:53.582438   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.582704   14235 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 12:47:53.583732   14235 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1102 12:47:53.584768   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1102 12:47:53.584905   14235 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1102 12:47:53.584920   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1102 12:47:53.584969   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.585326   14235 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1102 12:47:53.585342   14235 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1102 12:47:53.585406   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.585806   14235 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 12:47:53.586871   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1102 12:47:53.588600   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1102 12:47:53.589621   14235 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	W1102 12:47:53.589879   14235 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1102 12:47:53.590767   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1102 12:47:53.590909   14235 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1102 12:47:53.590925   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1102 12:47:53.590979   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.592810   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1102 12:47:53.592946   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1102 12:47:53.593034   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.596279   14235 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 12:47:53.597808   14235 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 12:47:53.597845   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 12:47:53.597904   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.606292   14235 addons.go:239] Setting addon default-storageclass=true in "addons-341255"
	I1102 12:47:53.606403   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:47:53.606846   14235 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1102 12:47:53.606903   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:47:53.608893   14235 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1102 12:47:53.608990   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1102 12:47:53.609159   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.608948   14235 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1102 12:47:53.609084   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.612282   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1102 12:47:53.612362   14235 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1102 12:47:53.612489   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.622183   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.627001   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.635658   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.640697   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.642258   14235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
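Note: the sed pipeline above patches the CoreDNS Corefile so pods can resolve host.minikube.internal to the Docker bridge gateway. Reconstructed from the sed expression, the injected stanza is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
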
	I1102 12:47:53.658965   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.660375   14235 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1102 12:47:53.664196   14235 out.go:179]   - Using image docker.io/busybox:stable
	I1102 12:47:53.664495   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.664980   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.665424   14235 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1102 12:47:53.665440   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1102 12:47:53.665501   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.667626   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.670348   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.670803   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.675540   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.679392   14235 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 12:47:53.679409   14235 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 12:47:53.679456   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:47:53.679758   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	W1102 12:47:53.693122   14235 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1102 12:47:53.693345   14235 retry.go:31] will retry after 313.675455ms: ssh: handshake failed: EOF
	I1102 12:47:53.716337   14235 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 12:47:53.716534   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:47:53.722694   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	W1102 12:47:53.723727   14235 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1102 12:47:53.723755   14235 retry.go:31] will retry after 322.850877ms: ssh: handshake failed: EOF
	I1102 12:47:53.777933   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1102 12:47:53.808230   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 12:47:53.817989   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1102 12:47:53.823436   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1102 12:47:53.826730   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1102 12:47:53.826751   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1102 12:47:53.835803   14235 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1102 12:47:53.835827   14235 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1102 12:47:53.849964   14235 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:47:53.849992   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1102 12:47:53.852988   14235 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1102 12:47:53.853015   14235 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1102 12:47:53.861927   14235 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1102 12:47:53.861953   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1102 12:47:53.866335   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1102 12:47:53.872773   14235 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1102 12:47:53.872800   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1102 12:47:53.873757   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 12:47:53.882609   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1102 12:47:53.885222   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1102 12:47:53.895602   14235 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1102 12:47:53.895642   14235 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1102 12:47:53.897627   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1102 12:47:53.897652   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1102 12:47:53.898405   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:47:53.903669   14235 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1102 12:47:53.903692   14235 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1102 12:47:53.931865   14235 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1102 12:47:53.931897   14235 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1102 12:47:53.933673   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1102 12:47:53.933692   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1102 12:47:53.948149   14235 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1102 12:47:53.948248   14235 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1102 12:47:53.961762   14235 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1102 12:47:53.961785   14235 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1102 12:47:53.981494   14235 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1102 12:47:53.981598   14235 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1102 12:47:53.990228   14235 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1102 12:47:53.990302   14235 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1102 12:47:54.000508   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1102 12:47:54.000531   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1102 12:47:54.018204   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1102 12:47:54.018316   14235 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1102 12:47:54.042219   14235 start.go:1013] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1102 12:47:54.044055   14235 node_ready.go:35] waiting up to 6m0s for node "addons-341255" to be "Ready" ...
	I1102 12:47:54.044760   14235 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1102 12:47:54.044824   14235 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1102 12:47:54.057842   14235 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1102 12:47:54.057868   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1102 12:47:54.074582   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1102 12:47:54.089200   14235 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1102 12:47:54.089222   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1102 12:47:54.092349   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1102 12:47:54.092368   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1102 12:47:54.111925   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1102 12:47:54.138560   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1102 12:47:54.138618   14235 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1102 12:47:54.139464   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1102 12:47:54.188471   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1102 12:47:54.188500   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1102 12:47:54.224244   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1102 12:47:54.231111   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1102 12:47:54.231188   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1102 12:47:54.246226   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1102 12:47:54.255585   14235 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1102 12:47:54.255610   14235 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1102 12:47:54.296748   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1102 12:47:54.560161   14235 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-341255" context rescaled to 1 replicas
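Note: minikube rescales the coredns deployment from kubeadm's default of two replicas down to one, since a single-node cluster gains nothing from the second copy; the equivalent manual step would be:

	kubectl -n kube-system scale deployment coredns --replicas=1
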
	I1102 12:47:55.098684   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.216035837s)
	I1102 12:47:55.098730   14235 addons.go:480] Verifying addon ingress=true in "addons-341255"
	I1102 12:47:55.098862   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.213599091s)
	I1102 12:47:55.098891   14235 addons.go:480] Verifying addon registry=true in "addons-341255"
	I1102 12:47:55.099042   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.200604214s)
	W1102 12:47:55.099071   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:55.099093   14235 retry.go:31] will retry after 151.321007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
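Note: the root cause of this validation failure is visible earlier in the run: the scp at 12:47:53 shows ig-crd.yaml arriving as a 14-byte file, so it carries no usable manifest, and kubectl rejects it because every applied object must at minimum declare a type header such as:

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition

The --force retry at 12:47:55 below fails the same way for as long as the file stays effectively empty.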
	I1102 12:47:55.099127   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.024500033s)
	I1102 12:47:55.099145   14235 addons.go:480] Verifying addon metrics-server=true in "addons-341255"
	I1102 12:47:55.100210   14235 out.go:179] * Verifying registry addon...
	I1102 12:47:55.100210   14235 out.go:179] * Verifying ingress addon...
	I1102 12:47:55.101250   14235 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-341255 service yakd-dashboard -n yakd-dashboard
	
	I1102 12:47:55.103174   14235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1102 12:47:55.103179   14235 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1102 12:47:55.105582   14235 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1102 12:47:55.105600   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:55.107481   14235 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1102 12:47:55.107498   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:55.250874   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:47:55.413455   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.273950835s)
	W1102 12:47:55.413506   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1102 12:47:55.413508   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.189222288s)
	I1102 12:47:55.413529   14235 retry.go:31] will retry after 251.618983ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
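Note: this is the usual CRD-before-CR ordering race: the VolumeSnapshotClass is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet established the new types when the class is validated. The --force retry below resolves it once the CRDs settle; scripted installs typically wait explicitly, e.g.:

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
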
	I1102 12:47:55.413548   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.167291037s)
	I1102 12:47:55.413791   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.116826152s)
	I1102 12:47:55.413814   14235 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-341255"
	I1102 12:47:55.415159   14235 out.go:179] * Verifying csi-hostpath-driver addon...
	I1102 12:47:55.417656   14235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1102 12:47:55.420473   14235 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1102 12:47:55.420493   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:47:55.605914   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:55.606105   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:55.666024   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1102 12:47:55.856462   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:55.856547   14235 retry.go:31] will retry after 529.760049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
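kubectl's client-side validation rejects any manifest document that does not carry both apiVersion and kind, and it fails the whole apply even though every other object in ig-crd.yaml and ig-deployment.yaml was accepted ("unchanged"/"configured" in stdout). The log alone does not show which document in ig-crd.yaml is at fault. A hypothetical pre-check that would localize the problem, decoding the multi-document YAML stream with gopkg.in/yaml.v3 (an illustration, not part of minikube):

    // manifestcheck.go: report YAML documents missing apiVersion or kind,
    // the two fields behind the validation error above.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for i := 0; ; i++ {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                panic(err)
            }
            if doc["apiVersion"] == nil || doc["kind"] == nil {
                fmt.Printf("document %d: apiVersion/kind not set\n", i)
            }
        }
    }

Because the retried command is byte-for-byte identical, every retry below fails the same way; the backoff only changes how long the run waits between identical errors.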
	I1102 12:47:55.919989   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:47:56.046431   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
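node_ready.go is a separate poll on the node object itself: the kubelet posts a NodeReady condition into the node's status once the container runtime and networking are up, and minikube retries until that condition reads True. A minimal sketch of the check (hypothetical program, not minikube's node_ready implementation):

    // nodeready.go: report whether a node's NodeReady condition is True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil // no NodeReady condition posted yet
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := nodeReady(context.Background(), cs, "addons-341255")
        fmt.Println("ready:", ready, "err:", err)
    }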
	I1102 12:47:56.106274   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:56.106360   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:56.387082   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:47:56.421043   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:47:56.607090   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:56.607147   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:56.920863   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:47:57.106951   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:57.106951   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:57.421051   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:47:57.606912   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:57.607000   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:57.920473   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:47:58.047093   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:47:58.106431   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:58.106601   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:58.152961   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.486895479s)
	I1102 12:47:58.152997   14235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.76588472s)
	W1102 12:47:58.153027   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:58.153046   14235 retry.go:31] will retry after 778.208022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:58.421148   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:47:58.606768   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:58.606953   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:58.921240   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:47:58.932349   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:47:59.106763   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:59.106912   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:59.420589   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:47:59.451546   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:59.451592   14235 retry.go:31] will retry after 1.196848603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:47:59.606706   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:47:59.606794   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:47:59.920755   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:00.106867   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:00.106940   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:00.420634   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:00.547063   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:00.606975   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:00.607154   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:00.649195   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:00.920837   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:01.106337   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:01.106378   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 12:48:01.168336   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:01.168370   14235 retry.go:31] will retry after 1.572153218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:01.172231   14235 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1102 12:48:01.172301   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:48:01.188952   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:48:01.292806   14235 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1102 12:48:01.304964   14235 addons.go:239] Setting addon gcp-auth=true in "addons-341255"
	I1102 12:48:01.305017   14235 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:48:01.305360   14235 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:48:01.323723   14235 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1102 12:48:01.323775   14235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:48:01.340837   14235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
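The cli_runner/sshutil pair above shows how minikube reaches a docker-driver node: it asks Docker which host port was published for the container's 22/tcp, then opens an SSH client against 127.0.0.1 on that port with the profile's id_rsa key. The port lookup, using the exact Go template from the log, can be reproduced with a small wrapper (hypothetical helper, assuming the docker CLI is on PATH):

    // hostport.go: resolve the host port Docker mapped to a container's 22/tcp.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func sshHostPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("addons-341255")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 32768 in this run
    }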
	I1102 12:48:01.420612   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:01.437339   14235 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 12:48:01.438869   14235 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1102 12:48:01.439790   14235 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1102 12:48:01.439803   14235 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1102 12:48:01.451974   14235 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1102 12:48:01.451995   14235 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1102 12:48:01.463749   14235 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1102 12:48:01.463768   14235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1102 12:48:01.475701   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1102 12:48:01.607031   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:01.607086   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:01.771715   14235 addons.go:480] Verifying addon gcp-auth=true in "addons-341255"
	I1102 12:48:01.773280   14235 out.go:179] * Verifying gcp-auth addon...
	I1102 12:48:01.775261   14235 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1102 12:48:01.777392   14235 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1102 12:48:01.777430   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:01.921461   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:02.106397   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:02.106611   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:02.277912   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:02.420774   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:02.547543   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:02.606298   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:02.606590   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:02.741521   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:02.778999   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:02.920750   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:03.106523   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:03.106623   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 12:48:03.270903   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:03.270944   14235 retry.go:31] will retry after 1.823023277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:03.278343   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:03.421551   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:03.606257   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:03.606352   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:03.777923   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:03.920409   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:04.106656   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:04.106786   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:04.278388   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:04.421612   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:04.606785   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:04.607048   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:04.778122   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:04.920550   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:05.046898   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:05.095073   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:05.106549   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:05.107406   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:05.278345   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:05.421065   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:05.607082   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:05.607185   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 12:48:05.617124   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:05.617158   14235 retry.go:31] will retry after 4.122018669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:05.778626   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:05.921240   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:06.106479   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:06.106690   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:06.278167   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:06.420844   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:06.605814   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:06.605832   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:06.778313   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:06.920726   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:07.047487   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:07.106038   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:07.106227   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:07.278903   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:07.420637   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:07.606600   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:07.606678   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:07.778420   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:07.920771   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:08.105915   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:08.106075   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:08.278896   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:08.421337   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:08.606336   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:08.606444   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:08.777923   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:08.920613   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:09.105818   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:09.106060   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:09.278655   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:09.420506   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:09.546928   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:09.606292   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:09.606387   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:09.739470   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:09.778355   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:09.920801   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:10.106469   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:10.106610   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 12:48:10.265150   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:10.265182   14235 retry.go:31] will retry after 2.515147563s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:10.278694   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:10.420716   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:10.606747   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:10.606794   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:10.778734   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:10.920550   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:11.106616   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:11.106682   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:11.277920   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:11.420662   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:11.547076   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:11.606510   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:11.606718   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:11.778103   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:11.920943   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:12.106223   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:12.106425   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:12.278777   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:12.420673   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:12.605948   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:12.606206   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:12.778623   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:12.780704   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:12.921007   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:13.105721   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:13.105880   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:13.278788   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1102 12:48:13.310288   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:13.310319   14235 retry.go:31] will retry after 8.074968626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:13.421298   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:13.606613   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:13.606782   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:13.778406   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:13.921100   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:14.046502   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:14.105994   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:14.106077   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:14.278679   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:14.420502   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:14.606113   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:14.606351   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:14.777942   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:14.920374   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:15.106024   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:15.106280   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:15.277841   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:15.420425   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:15.606316   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:15.606401   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:15.777950   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:15.920645   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:16.046899   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:16.106552   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:16.106583   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:16.277918   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:16.420498   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:16.606476   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:16.606671   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:16.778211   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:16.920933   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:17.105662   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:17.105805   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:17.278161   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:17.420823   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:17.605844   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:17.605988   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:17.778270   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:17.921238   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:18.105901   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:18.106169   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:18.278617   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:18.420593   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:18.547245   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:18.605941   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:18.606092   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:18.778681   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:18.920170   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:19.105993   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:19.106072   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:19.278523   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:19.421212   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:19.606195   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:19.606404   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:19.778900   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:19.920211   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:20.105900   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:20.105978   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:20.278641   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:20.420121   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:20.547444   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:20.605771   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:20.605945   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:20.778613   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:20.921095   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:21.105920   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:21.106128   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:21.278807   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:21.385997   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:21.420905   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:21.606551   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:21.606638   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:21.778831   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1102 12:48:21.907610   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:21.907641   14235 retry.go:31] will retry after 10.635923497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
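The retry.go intervals across this section (529ms, 778ms, 1.2s, 1.57s, 1.82s, 4.12s, 2.51s, 8.07s, now 10.6s) grow roughly exponentially with jitter, which is why they are not strictly monotonic. A minimal sketch of that pattern (an illustration of jittered backoff, not minikube's actual retry package):

    // backoff.go: retry a command with jittered, roughly exponential waits.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func retryCommand(name string, args []string, attempts int) error {
        wait := 500 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command(name, args...).Run(); err == nil {
                return nil
            }
            // Up to 50% jitter on top of the base wait, then grow the base.
            sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
            fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, sleep)
            time.Sleep(sleep)
            wait *= 2
        }
        return err
    }

    func main() {
        err := retryCommand("kubectl", []string{
            "apply", "--force",
            "-f", "/etc/kubernetes/addons/ig-crd.yaml",
            "-f", "/etc/kubernetes/addons/ig-deployment.yaml",
        }, 9)
        fmt.Println("final:", err)
    }

Since the failure here is deterministic (the manifest itself fails validation), backing off cannot change the outcome; it only spaces out identical errors.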
	I1102 12:48:21.921006   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:22.106052   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:22.106273   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:22.278390   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:22.421210   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:22.606014   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:22.606055   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:22.778459   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:22.921129   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:23.047682   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:23.106086   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:23.106295   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:23.277799   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:23.420543   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:23.605887   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:23.606063   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:23.778555   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:23.921287   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:24.106412   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:24.106478   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:24.277747   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:24.420375   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:24.606483   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:24.606604   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:24.777804   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:24.920309   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:25.106336   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:25.106542   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:25.278187   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:25.420636   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:25.547335   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:25.605589   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:25.605788   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:25.777856   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:25.921927   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:26.105825   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:26.105986   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:26.278523   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:26.421224   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:26.606465   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:26.606682   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:26.778038   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:26.920669   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:27.106661   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:27.106746   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:27.278230   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:27.421006   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:27.606989   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:27.608085   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:27.778771   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:27.920333   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:28.046880   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:28.106782   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:28.106984   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:28.278720   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:28.420392   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:28.606561   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:28.606739   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:28.778278   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:28.920913   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:29.105911   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:29.106124   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:29.278735   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:29.420226   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:29.606603   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:29.606794   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:29.778237   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:29.920846   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:30.047164   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:30.106544   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:30.106717   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:30.278604   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:30.421238   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:30.606152   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:30.606389   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:30.777744   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:30.920261   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:31.106391   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:31.106614   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:31.278435   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:31.420937   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:31.606251   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:31.606681   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:31.777851   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:31.920521   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:32.106452   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:32.106668   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:32.278248   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:32.421089   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:32.544263   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1102 12:48:32.547084   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:32.607040   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:32.607115   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:32.778637   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:32.921239   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:33.072632   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:33.072662   14235 retry.go:31] will retry after 15.068864638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
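
	The failure above is kubectl's manifest validation rejecting /etc/kubernetes/addons/ig-crd.yaml: every Kubernetes manifest must declare apiVersion and kind, and the error reports both as unset. The file's actual contents are not reproduced in this report; the sketch below only illustrates how one might inspect the header on the node and what a valid CRD header conventionally looks like (the apiextensions.k8s.io/v1 header is the standard one for CRDs, not a quote from this file):

	# Sketch: inspect the first lines of the rejected manifest inside the minikube node.
	minikube -p addons-341255 ssh -- head -n 4 /etc/kubernetes/addons/ig-crd.yaml
	# A valid CRD manifest would begin with, e.g.:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition

	Passing --validate=false, as the message suggests, only silences the check; a document with no apiVersion or kind still cannot be mapped to an API type, so repairing the header is the durable fix.
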
	I1102 12:48:33.106223   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:33.106264   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:33.278770   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:33.420299   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:33.606714   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:33.606889   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:33.778624   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:33.921321   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:34.106541   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:34.106686   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:34.278102   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:34.420750   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 12:48:34.547373   14235 node_ready.go:57] node "addons-341255" has "Ready":"False" status (will retry)
	I1102 12:48:34.608996   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:34.609151   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:34.780460   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:34.921676   14235 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1102 12:48:34.921703   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:35.047666   14235 node_ready.go:49] node "addons-341255" is "Ready"
	I1102 12:48:35.047701   14235 node_ready.go:38] duration metric: took 41.003557274s for node "addons-341255" to be "Ready" ...
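
	The poller that just completed corresponds to waiting for the node's Ready condition. A minimal equivalent from the host, assuming kubectl is pointed at this cluster (minikube sets that kubeconfig context by default):

	# Block until the node reports the Ready condition (sketch of the same wait).
	kubectl wait --for=condition=Ready node/addons-341255 --timeout=120s
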
	I1102 12:48:35.047720   14235 api_server.go:52] waiting for apiserver process to appear ...
	I1102 12:48:35.047776   14235 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 12:48:35.066112   14235 api_server.go:72] duration metric: took 41.601005894s to wait for apiserver process to appear ...
	I1102 12:48:35.066145   14235 api_server.go:88] waiting for apiserver healthz status ...
	I1102 12:48:35.066170   14235 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1102 12:48:35.071526   14235 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1102 12:48:35.072436   14235 api_server.go:141] control plane version: v1.34.1
	I1102 12:48:35.072463   14235 api_server.go:131] duration metric: took 6.309312ms to wait for apiserver health ...
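
	The healthz probe logged here is a plain HTTPS GET that returns the literal body "ok" when the apiserver is healthy, so it can be reproduced by hand. A sketch, assuming the cluster is still running at the same address; -k skips verification of minikube's self-signed certificate, and /healthz is readable without credentials under the default RBAC rules:

	# Same health probe as the log above; prints "ok" on success.
	curl -k https://192.168.49.2:8443/healthz
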
	I1102 12:48:35.072474   14235 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 12:48:35.076254   14235 system_pods.go:59] 20 kube-system pods found
	I1102 12:48:35.076287   14235 system_pods.go:61] "amd-gpu-device-plugin-kjxsc" [f1aff96e-1b05-4f54-8ca1-4dec91ec69de] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1102 12:48:35.076296   14235 system_pods.go:61] "coredns-66bc5c9577-pvw29" [75d01053-1137-481f-a631-9589ef68c4bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 12:48:35.076307   14235 system_pods.go:61] "csi-hostpath-attacher-0" [b7f892ff-92e7-4b2b-9e17-f07990d022cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 12:48:35.076315   14235 system_pods.go:61] "csi-hostpath-resizer-0" [e596dc1e-310d-4e8b-89a4-13415eb568ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 12:48:35.076323   14235 system_pods.go:61] "csi-hostpathplugin-dj5hr" [5fa980a1-5140-4891-936c-a18f81fc2fa6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 12:48:35.076333   14235 system_pods.go:61] "etcd-addons-341255" [4a9c29c7-8b95-4d9c-9e70-64d828564cf5] Running
	I1102 12:48:35.076338   14235 system_pods.go:61] "kindnet-wsss9" [b026a542-5afb-4529-b49a-15b8f8992e81] Running
	I1102 12:48:35.076348   14235 system_pods.go:61] "kube-apiserver-addons-341255" [db4b7996-6dd7-49f7-a30b-58d912a334d2] Running
	I1102 12:48:35.076353   14235 system_pods.go:61] "kube-controller-manager-addons-341255" [9ba1198b-9297-4650-aa65-1b04a6a5b7aa] Running
	I1102 12:48:35.076364   14235 system_pods.go:61] "kube-ingress-dns-minikube" [64449ffe-7912-4563-8a68-633847fe26ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 12:48:35.076371   14235 system_pods.go:61] "kube-proxy-prdwm" [7aa09fd7-54a4-422e-ad98-0cd851a8ca56] Running
	I1102 12:48:35.076377   14235 system_pods.go:61] "kube-scheduler-addons-341255" [3a057967-fb83-46ef-8203-01a7a5b20df9] Running
	I1102 12:48:35.076387   14235 system_pods.go:61] "metrics-server-85b7d694d7-gxjkw" [06bf62df-163b-4afb-9505-6cc7bdca087f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 12:48:35.076397   14235 system_pods.go:61] "nvidia-device-plugin-daemonset-5g45d" [cd4170a6-d3bf-437b-8100-33df2a1c3693] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 12:48:35.076408   14235 system_pods.go:61] "registry-6b586f9694-w59vr" [2ed80c97-9f39-46ad-8c55-323cb5ec9834] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 12:48:35.076432   14235 system_pods.go:61] "registry-creds-764b6fb674-xqr5t" [7e050c27-8e52-47ac-a415-124991eae36a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 12:48:35.076441   14235 system_pods.go:61] "registry-proxy-2rjx9" [95e7e89f-42a8-4527-9c6e-2acb1b50b3fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 12:48:35.076453   14235 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d8c66" [00151aa7-7190-4a2a-98bf-168f13f3d593] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.076466   14235 system_pods.go:61] "snapshot-controller-7d9fbc56b8-lrxfs" [602caacb-0980-4bd9-bd90-612501dafc40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.076473   14235 system_pods.go:61] "storage-provisioner" [f41527b9-0120-4e29-994c-932501c2eb53] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 12:48:35.076485   14235 system_pods.go:74] duration metric: took 4.003475ms to wait for pod list to return data ...
	I1102 12:48:35.076494   14235 default_sa.go:34] waiting for default service account to be created ...
	I1102 12:48:35.078954   14235 default_sa.go:45] found service account: "default"
	I1102 12:48:35.078979   14235 default_sa.go:55] duration metric: took 2.478323ms for default service account to be created ...
	I1102 12:48:35.078990   14235 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 12:48:35.175784   14235 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1102 12:48:35.175810   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:35.176429   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:35.180811   14235 system_pods.go:86] 20 kube-system pods found
	I1102 12:48:35.180845   14235 system_pods.go:89] "amd-gpu-device-plugin-kjxsc" [f1aff96e-1b05-4f54-8ca1-4dec91ec69de] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1102 12:48:35.180855   14235 system_pods.go:89] "coredns-66bc5c9577-pvw29" [75d01053-1137-481f-a631-9589ef68c4bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 12:48:35.180865   14235 system_pods.go:89] "csi-hostpath-attacher-0" [b7f892ff-92e7-4b2b-9e17-f07990d022cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 12:48:35.180872   14235 system_pods.go:89] "csi-hostpath-resizer-0" [e596dc1e-310d-4e8b-89a4-13415eb568ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 12:48:35.180880   14235 system_pods.go:89] "csi-hostpathplugin-dj5hr" [5fa980a1-5140-4891-936c-a18f81fc2fa6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 12:48:35.180886   14235 system_pods.go:89] "etcd-addons-341255" [4a9c29c7-8b95-4d9c-9e70-64d828564cf5] Running
	I1102 12:48:35.180891   14235 system_pods.go:89] "kindnet-wsss9" [b026a542-5afb-4529-b49a-15b8f8992e81] Running
	I1102 12:48:35.180896   14235 system_pods.go:89] "kube-apiserver-addons-341255" [db4b7996-6dd7-49f7-a30b-58d912a334d2] Running
	I1102 12:48:35.180901   14235 system_pods.go:89] "kube-controller-manager-addons-341255" [9ba1198b-9297-4650-aa65-1b04a6a5b7aa] Running
	I1102 12:48:35.180908   14235 system_pods.go:89] "kube-ingress-dns-minikube" [64449ffe-7912-4563-8a68-633847fe26ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 12:48:35.180913   14235 system_pods.go:89] "kube-proxy-prdwm" [7aa09fd7-54a4-422e-ad98-0cd851a8ca56] Running
	I1102 12:48:35.180918   14235 system_pods.go:89] "kube-scheduler-addons-341255" [3a057967-fb83-46ef-8203-01a7a5b20df9] Running
	I1102 12:48:35.180925   14235 system_pods.go:89] "metrics-server-85b7d694d7-gxjkw" [06bf62df-163b-4afb-9505-6cc7bdca087f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 12:48:35.180934   14235 system_pods.go:89] "nvidia-device-plugin-daemonset-5g45d" [cd4170a6-d3bf-437b-8100-33df2a1c3693] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 12:48:35.180941   14235 system_pods.go:89] "registry-6b586f9694-w59vr" [2ed80c97-9f39-46ad-8c55-323cb5ec9834] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 12:48:35.180949   14235 system_pods.go:89] "registry-creds-764b6fb674-xqr5t" [7e050c27-8e52-47ac-a415-124991eae36a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 12:48:35.180956   14235 system_pods.go:89] "registry-proxy-2rjx9" [95e7e89f-42a8-4527-9c6e-2acb1b50b3fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 12:48:35.180966   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d8c66" [00151aa7-7190-4a2a-98bf-168f13f3d593] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.180974   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lrxfs" [602caacb-0980-4bd9-bd90-612501dafc40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.180981   14235 system_pods.go:89] "storage-provisioner" [f41527b9-0120-4e29-994c-932501c2eb53] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 12:48:35.180997   14235 retry.go:31] will retry after 297.366179ms: missing components: kube-dns
	I1102 12:48:35.280371   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:35.422944   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:35.484258   14235 system_pods.go:86] 20 kube-system pods found
	I1102 12:48:35.484422   14235 system_pods.go:89] "amd-gpu-device-plugin-kjxsc" [f1aff96e-1b05-4f54-8ca1-4dec91ec69de] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1102 12:48:35.484448   14235 system_pods.go:89] "coredns-66bc5c9577-pvw29" [75d01053-1137-481f-a631-9589ef68c4bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 12:48:35.484482   14235 system_pods.go:89] "csi-hostpath-attacher-0" [b7f892ff-92e7-4b2b-9e17-f07990d022cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 12:48:35.484508   14235 system_pods.go:89] "csi-hostpath-resizer-0" [e596dc1e-310d-4e8b-89a4-13415eb568ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 12:48:35.484529   14235 system_pods.go:89] "csi-hostpathplugin-dj5hr" [5fa980a1-5140-4891-936c-a18f81fc2fa6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 12:48:35.484549   14235 system_pods.go:89] "etcd-addons-341255" [4a9c29c7-8b95-4d9c-9e70-64d828564cf5] Running
	I1102 12:48:35.484671   14235 system_pods.go:89] "kindnet-wsss9" [b026a542-5afb-4529-b49a-15b8f8992e81] Running
	I1102 12:48:35.484721   14235 system_pods.go:89] "kube-apiserver-addons-341255" [db4b7996-6dd7-49f7-a30b-58d912a334d2] Running
	I1102 12:48:35.484730   14235 system_pods.go:89] "kube-controller-manager-addons-341255" [9ba1198b-9297-4650-aa65-1b04a6a5b7aa] Running
	I1102 12:48:35.484740   14235 system_pods.go:89] "kube-ingress-dns-minikube" [64449ffe-7912-4563-8a68-633847fe26ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 12:48:35.484746   14235 system_pods.go:89] "kube-proxy-prdwm" [7aa09fd7-54a4-422e-ad98-0cd851a8ca56] Running
	I1102 12:48:35.484752   14235 system_pods.go:89] "kube-scheduler-addons-341255" [3a057967-fb83-46ef-8203-01a7a5b20df9] Running
	I1102 12:48:35.484760   14235 system_pods.go:89] "metrics-server-85b7d694d7-gxjkw" [06bf62df-163b-4afb-9505-6cc7bdca087f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 12:48:35.484802   14235 system_pods.go:89] "nvidia-device-plugin-daemonset-5g45d" [cd4170a6-d3bf-437b-8100-33df2a1c3693] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 12:48:35.484823   14235 system_pods.go:89] "registry-6b586f9694-w59vr" [2ed80c97-9f39-46ad-8c55-323cb5ec9834] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 12:48:35.484842   14235 system_pods.go:89] "registry-creds-764b6fb674-xqr5t" [7e050c27-8e52-47ac-a415-124991eae36a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 12:48:35.484864   14235 system_pods.go:89] "registry-proxy-2rjx9" [95e7e89f-42a8-4527-9c6e-2acb1b50b3fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 12:48:35.484912   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d8c66" [00151aa7-7190-4a2a-98bf-168f13f3d593] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.484933   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lrxfs" [602caacb-0980-4bd9-bd90-612501dafc40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.484941   14235 system_pods.go:89] "storage-provisioner" [f41527b9-0120-4e29-994c-932501c2eb53] Running
	I1102 12:48:35.484961   14235 retry.go:31] will retry after 298.621934ms: missing components: kube-dns
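
	Both retries above are blocked on kube-dns, i.e. the coredns pod that the listings still show as Pending. Assuming kubectl access to the cluster, the same component can be checked directly through its conventional label:

	# coredns carries the k8s-app=kube-dns label by convention.
	kubectl -n kube-system get pods -l k8s-app=kube-dns
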
	I1102 12:48:35.608600   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:35.608644   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:35.779658   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:35.789685   14235 system_pods.go:86] 20 kube-system pods found
	I1102 12:48:35.789722   14235 system_pods.go:89] "amd-gpu-device-plugin-kjxsc" [f1aff96e-1b05-4f54-8ca1-4dec91ec69de] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1102 12:48:35.789730   14235 system_pods.go:89] "coredns-66bc5c9577-pvw29" [75d01053-1137-481f-a631-9589ef68c4bf] Running
	I1102 12:48:35.789740   14235 system_pods.go:89] "csi-hostpath-attacher-0" [b7f892ff-92e7-4b2b-9e17-f07990d022cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 12:48:35.789748   14235 system_pods.go:89] "csi-hostpath-resizer-0" [e596dc1e-310d-4e8b-89a4-13415eb568ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 12:48:35.789765   14235 system_pods.go:89] "csi-hostpathplugin-dj5hr" [5fa980a1-5140-4891-936c-a18f81fc2fa6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 12:48:35.789772   14235 system_pods.go:89] "etcd-addons-341255" [4a9c29c7-8b95-4d9c-9e70-64d828564cf5] Running
	I1102 12:48:35.789780   14235 system_pods.go:89] "kindnet-wsss9" [b026a542-5afb-4529-b49a-15b8f8992e81] Running
	I1102 12:48:35.789785   14235 system_pods.go:89] "kube-apiserver-addons-341255" [db4b7996-6dd7-49f7-a30b-58d912a334d2] Running
	I1102 12:48:35.789790   14235 system_pods.go:89] "kube-controller-manager-addons-341255" [9ba1198b-9297-4650-aa65-1b04a6a5b7aa] Running
	I1102 12:48:35.789797   14235 system_pods.go:89] "kube-ingress-dns-minikube" [64449ffe-7912-4563-8a68-633847fe26ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 12:48:35.789802   14235 system_pods.go:89] "kube-proxy-prdwm" [7aa09fd7-54a4-422e-ad98-0cd851a8ca56] Running
	I1102 12:48:35.789807   14235 system_pods.go:89] "kube-scheduler-addons-341255" [3a057967-fb83-46ef-8203-01a7a5b20df9] Running
	I1102 12:48:35.789815   14235 system_pods.go:89] "metrics-server-85b7d694d7-gxjkw" [06bf62df-163b-4afb-9505-6cc7bdca087f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 12:48:35.789823   14235 system_pods.go:89] "nvidia-device-plugin-daemonset-5g45d" [cd4170a6-d3bf-437b-8100-33df2a1c3693] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 12:48:35.789831   14235 system_pods.go:89] "registry-6b586f9694-w59vr" [2ed80c97-9f39-46ad-8c55-323cb5ec9834] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 12:48:35.789841   14235 system_pods.go:89] "registry-creds-764b6fb674-xqr5t" [7e050c27-8e52-47ac-a415-124991eae36a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 12:48:35.789855   14235 system_pods.go:89] "registry-proxy-2rjx9" [95e7e89f-42a8-4527-9c6e-2acb1b50b3fa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 12:48:35.789862   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d8c66" [00151aa7-7190-4a2a-98bf-168f13f3d593] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.789871   14235 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lrxfs" [602caacb-0980-4bd9-bd90-612501dafc40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 12:48:35.789877   14235 system_pods.go:89] "storage-provisioner" [f41527b9-0120-4e29-994c-932501c2eb53] Running
	I1102 12:48:35.789886   14235 system_pods.go:126] duration metric: took 710.88901ms to wait for k8s-apps to be running ...
	I1102 12:48:35.789926   14235 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 12:48:35.789999   14235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 12:48:35.842767   14235 system_svc.go:56] duration metric: took 52.861179ms WaitForService to wait for kubelet
	I1102 12:48:35.842848   14235 kubeadm.go:587] duration metric: took 42.377746056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 12:48:35.842873   14235 node_conditions.go:102] verifying NodePressure condition ...
	I1102 12:48:35.846143   14235 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 12:48:35.846175   14235 node_conditions.go:123] node cpu capacity is 8
	I1102 12:48:35.846193   14235 node_conditions.go:105] duration metric: took 3.31399ms to run NodePressure ...
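
	The capacity figures used by the NodePressure check (304681132Ki ephemeral storage, 8 CPUs) are standard fields of the node's status and can be read back directly; a sketch, assuming kubectl access:

	# Print the node's capacity map, including cpu and ephemeral-storage.
	kubectl get node addons-341255 -o jsonpath='{.status.capacity}'
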
	I1102 12:48:35.846207   14235 start.go:242] waiting for startup goroutines ...
	I1102 12:48:35.921493   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:36.106204   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:36.106430   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:36.278066   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:36.421161   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:36.607038   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:36.607099   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:36.778743   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:36.920812   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:37.106745   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:37.106809   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:37.278333   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:37.421618   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:37.606754   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:37.606799   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:37.778902   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:37.920956   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:38.107063   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:38.107068   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:38.279028   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:38.421848   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:38.607296   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:38.607468   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:38.778184   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:38.921335   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:39.132262   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:39.132270   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:39.279168   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:39.421141   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:39.606885   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:39.606928   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:39.778633   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:39.920959   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:40.106861   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:40.106934   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:40.278890   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:40.421036   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:40.607151   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:40.607327   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:40.779128   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:40.921722   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:41.107033   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:41.107239   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:41.279382   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:41.421575   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:41.606743   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:41.606840   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:41.778765   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:41.921650   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:42.106621   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:42.106809   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:42.278603   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:42.535808   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:42.646506   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:42.646665   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:42.825039   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:42.927668   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:43.106312   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:43.106317   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:43.280636   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:43.421920   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:43.607772   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:43.607800   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:43.778434   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:43.921783   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:44.106996   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:44.107142   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:44.278737   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:44.420797   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:44.606903   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:44.606938   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:44.778355   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:44.920840   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:45.106378   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:45.106542   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:45.278429   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:45.421463   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:45.606303   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:45.606526   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:45.778866   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:45.922226   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:46.107813   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:46.108043   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:46.278126   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:46.421232   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:46.607231   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:46.607274   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:46.778895   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:46.921184   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:47.106847   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:47.106862   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:47.310201   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:47.421556   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:47.606405   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:47.606474   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:47.779084   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:47.921029   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:48.106993   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:48.107056   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:48.142158   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:48:48.278316   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:48.421697   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:48.607689   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:48.607886   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:48.778800   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1102 12:48:48.795362   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:48.795396   14235 retry.go:31] will retry after 13.784301391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:48:48.921521   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:49.106654   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:49.106666   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:49.278932   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:49.421072   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:49.607088   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:49.607260   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:49.778950   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:49.920936   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:50.106795   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:50.106873   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:50.278699   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:50.420502   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:50.606354   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:50.606357   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:50.778320   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:50.921026   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:51.107044   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:51.107149   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:51.279144   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:51.421276   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:51.609021   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:51.609205   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:51.778890   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:51.921043   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:52.106776   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:52.106827   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:52.277945   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:52.421166   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:52.641750   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:52.642323   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:52.790432   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:52.967968   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:53.107297   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:53.107350   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:53.278906   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:53.421773   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:53.607244   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:53.607275   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:53.779015   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:53.921129   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:54.107336   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:54.107442   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:54.277452   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:54.421332   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:54.606221   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:54.606248   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:54.778892   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:54.920772   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:55.106681   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:55.106825   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:55.279287   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:55.422668   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:55.609431   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:55.610069   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:55.779032   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:55.921112   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:56.107000   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:56.107117   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:56.278948   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:56.420606   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:56.629810   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:56.629844   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:56.793764   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:56.920919   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:57.106424   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:57.106456   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:57.278343   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:57.421460   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:57.607043   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:57.607164   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:57.778640   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:57.921324   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:58.106493   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:58.106672   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:58.278637   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:58.421589   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:58.606449   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:58.606524   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:58.777939   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:58.921181   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:59.107277   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 12:48:59.107394   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:59.278094   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:59.421300   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:48:59.606520   14235 kapi.go:107] duration metric: took 1m4.503340629s to wait for kubernetes.io/minikube-addons=registry ...
	I1102 12:48:59.606529   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:48:59.777838   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:48:59.920944   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:00.108115   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:00.279325   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:00.422247   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:00.607539   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:00.778291   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:00.921675   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:01.106647   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:01.412933   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:01.420870   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:01.609061   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:01.778942   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:01.921116   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:02.106947   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:02.279416   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:02.421699   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:02.580794   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 12:49:02.606812   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:02.778239   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:02.921175   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:03.106778   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 12:49:03.243619   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 12:49:03.243646   14235 retry.go:31] will retry after 48.637805342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
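	The failure above is kubectl's client-side validation rejecting ig-crd.yaml: every Kubernetes manifest document must carry apiVersion and kind, and the error reports both as unset. A minimal sketch of that pre-flight check, assuming gopkg.in/yaml.v3 and a local copy of the manifest (both the dependency and the file path are illustrative, not minikube's or kubectl's implementation):
	
	// validate.go: a minimal sketch of the check kubectl performs here —
	// every YAML document must carry apiVersion and kind.
	package main
	
	import (
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the manifest
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f) // iterates over multi-document YAML
		for i := 0; ; i++ {
			var obj map[string]interface{}
			if err := dec.Decode(&obj); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			for _, field := range []string{"apiVersion", "kind"} {
				if _, ok := obj[field]; !ok {
					fmt.Printf("document %d: %s not set\n", i, field)
				}
			}
		}
	}
	
	As the error itself notes, --validate=false would skip this check, but the manifest would still be rejected server-side without those fields.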
	I1102 12:49:03.278537   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:03.421480   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:03.606429   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:03.777956   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:03.920922   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:04.107039   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:04.278886   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:04.421520   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:04.606146   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:04.779252   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:04.921234   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:05.106999   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:05.278546   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:05.421686   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 12:49:05.606793   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:05.778599   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:05.921484   14235 kapi.go:107] duration metric: took 1m10.503824309s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1102 12:49:06.106131   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:06.278743   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:06.606809   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:06.779944   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:07.106751   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:07.278447   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:07.607111   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:07.779501   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:08.107488   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:08.278365   14235 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 12:49:08.606601   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:08.779243   14235 kapi.go:107] duration metric: took 1m7.003978908s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1102 12:49:08.780712   14235 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-341255 cluster.
	I1102 12:49:08.781825   14235 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1102 12:49:08.782890   14235 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1102 12:49:09.107735   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:09.607884   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:10.106244   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:10.606792   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:11.106455   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:11.606841   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:12.109093   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:12.606630   14235 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 12:49:13.107233   14235 kapi.go:107] duration metric: took 1m18.004062175s to wait for app.kubernetes.io/name=ingress-nginx ...
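	The kapi.go:96 lines above are minikube polling each addon's pods by label selector roughly every 500ms, and the kapi.go:107 lines report the total wait once a pod leaves Pending. The sketch below reproduces that polling pattern with client-go, using a label taken from the log; it is not minikube's actual implementation:
	
	// waitforpods.go: a minimal sketch of the label-selector polling the
	// kapi.go lines report.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)
	
		selector := "kubernetes.io/minikube-addons=registry" // label from the log
		start := time.Now()
		for {
			pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(
				context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 &&
				pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
				return
			}
			fmt.Printf("waiting for pod %q\n", selector)
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between polls
		}
	}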
	I1102 12:49:51.882912   14235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1102 12:49:52.401747   14235 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1102 12:49:52.401834   14235 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
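	The retry.go line earlier chose a long jittered delay (48.6s) before re-running the same kubectl apply; when that retry also failed, minikube surfaced the warning above and carried on enabling the remaining addons rather than aborting. A minimal sketch of the retry-with-backoff pattern, with illustrative delays and attempt counts rather than minikube's actual parameters:
	
	// retrysketch.go: retry an operation with jittered exponential backoff,
	// as the retry.go line above reports.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	func withRetry(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// exponential backoff with jitter: base * 2^i * [1.0, 2.0)
			delay := time.Duration(float64(base) * float64(int(1)<<i) * (1 + rand.Float64()))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}
	
	func main() {
		err := withRetry(3, 2*time.Second, func() error {
			return errors.New("apply failed") // stand-in for the kubectl apply above
		})
		fmt.Println("giving up:", err)
	}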
	I1102 12:49:52.404016   14235 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, default-storageclass, metrics-server, yakd, nvidia-device-plugin, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1102 12:49:52.405252   14235 addons.go:515] duration metric: took 1m58.940157614s for enable addons: enabled=[registry-creds amd-gpu-device-plugin storage-provisioner cloud-spanner ingress-dns default-storageclass metrics-server yakd nvidia-device-plugin storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1102 12:49:52.405296   14235 start.go:247] waiting for cluster config update ...
	I1102 12:49:52.405315   14235 start.go:256] writing updated cluster config ...
	I1102 12:49:52.405557   14235 ssh_runner.go:195] Run: rm -f paused
	I1102 12:49:52.409334   14235 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 12:49:52.412629   14235 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pvw29" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.416399   14235 pod_ready.go:94] pod "coredns-66bc5c9577-pvw29" is "Ready"
	I1102 12:49:52.416419   14235 pod_ready.go:86] duration metric: took 3.773029ms for pod "coredns-66bc5c9577-pvw29" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.418022   14235 pod_ready.go:83] waiting for pod "etcd-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.421469   14235 pod_ready.go:94] pod "etcd-addons-341255" is "Ready"
	I1102 12:49:52.421493   14235 pod_ready.go:86] duration metric: took 3.451366ms for pod "etcd-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.423166   14235 pod_ready.go:83] waiting for pod "kube-apiserver-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.426368   14235 pod_ready.go:94] pod "kube-apiserver-addons-341255" is "Ready"
	I1102 12:49:52.426387   14235 pod_ready.go:86] duration metric: took 3.202034ms for pod "kube-apiserver-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.428189   14235 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:52.813015   14235 pod_ready.go:94] pod "kube-controller-manager-addons-341255" is "Ready"
	I1102 12:49:52.813042   14235 pod_ready.go:86] duration metric: took 384.835847ms for pod "kube-controller-manager-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:53.013041   14235 pod_ready.go:83] waiting for pod "kube-proxy-prdwm" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:53.412594   14235 pod_ready.go:94] pod "kube-proxy-prdwm" is "Ready"
	I1102 12:49:53.412621   14235 pod_ready.go:86] duration metric: took 399.556047ms for pod "kube-proxy-prdwm" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:53.613189   14235 pod_ready.go:83] waiting for pod "kube-scheduler-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:54.012911   14235 pod_ready.go:94] pod "kube-scheduler-addons-341255" is "Ready"
	I1102 12:49:54.012941   14235 pod_ready.go:86] duration metric: took 399.725275ms for pod "kube-scheduler-addons-341255" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 12:49:54.012952   14235 pod_ready.go:40] duration metric: took 1.60358564s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 12:49:54.056329   14235 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 12:49:54.058276   14235 out.go:179] * Done! kubectl is now configured to use "addons-341255" cluster and "default" namespace by default
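	The pod_ready.go lines above go beyond pod phase: a pod counts as "Ready" only when its PodReady condition reports True. A minimal sketch of that condition check with client-go, using a pod name taken from the log; this is not minikube's actual code:
	
	// podready.go: report whether a pod's PodReady condition is True,
	// the same test the pod_ready.go lines above describe.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)
	
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-66bc5c9577-pvw29", metav1.GetOptions{}) // name from the log
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q ready: %v\n", pod.Name, isReady(pod))
	}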
	
	
	==> CRI-O <==
	Nov 02 12:49:47 addons-341255 crio[777]: time="2025-11-02T12:49:47.330863521Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 12:49:47 addons-341255 crio[777]: time="2025-11-02T12:49:47.330920698Z" level=info msg="Removed pod sandbox: b3b8302deb5b355e9f560d027d244b448e17bc7bb56628fae2c7e8fc25f188fe" id=93f108bd-fc43-4915-8844-83b5f2b84a9d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.938414209Z" level=info msg="Running pod sandbox: default/busybox/POD" id=03230200-b694-4fa1-aca7-783e2515094f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.938517862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.945286297Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:44db3cb59aabc1e80f69554e3a88a15c1dacaa2fbdca3571a23e54321061c86d UID:4bcca581-17a7-4233-ac03-1874944a76d9 NetNS:/var/run/netns/4f81aace-9e54-47d9-85a8-5639dbc93ea8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009234f8}] Aliases:map[]}"
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.945328547Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.955820154Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:44db3cb59aabc1e80f69554e3a88a15c1dacaa2fbdca3571a23e54321061c86d UID:4bcca581-17a7-4233-ac03-1874944a76d9 NetNS:/var/run/netns/4f81aace-9e54-47d9-85a8-5639dbc93ea8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009234f8}] Aliases:map[]}"
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.955947422Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.956769353Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.957466064Z" level=info msg="Ran pod sandbox 44db3cb59aabc1e80f69554e3a88a15c1dacaa2fbdca3571a23e54321061c86d with infra container: default/busybox/POD" id=03230200-b694-4fa1-aca7-783e2515094f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.958613667Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cc181b90-ce1a-441b-8985-dde0f1ebe2c8 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.958734771Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cc181b90-ce1a-441b-8985-dde0f1ebe2c8 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.958766908Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cc181b90-ce1a-441b-8985-dde0f1ebe2c8 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.959280747Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1f810eb0-a113-4fb1-89e9-d3d05707c33a name=/runtime.v1.ImageService/PullImage
	Nov 02 12:49:54 addons-341255 crio[777]: time="2025-11-02T12:49:54.960893014Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 02 12:49:56 addons-341255 crio[777]: time="2025-11-02T12:49:56.315345269Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1f810eb0-a113-4fb1-89e9-d3d05707c33a name=/runtime.v1.ImageService/PullImage
	Nov 02 12:49:56 addons-341255 crio[777]: time="2025-11-02T12:49:56.315919238Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d2dec04d-62fc-4f65-9f63-e6082ba6a093 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:49:56 addons-341255 crio[777]: time="2025-11-02T12:49:56.317284506Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=07129231-eb51-4ea9-af8a-83670273e0ce name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:49:56 addons-341255 crio[777]: time="2025-11-02T12:49:56.320489823Z" level=info msg="Creating container: default/busybox/busybox" id=8a54b7b0-c004-42f8-84e3-d38d9f643fb9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 12:49:56 addons-341255 crio[777]: time="2025-11-02T12:49:56.320632252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 12:49:56 addons-341255 crio[777]: time="2025-11-02T12:49:56.32567558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 12:49:56 addons-341255 crio[777]: time="2025-11-02T12:49:56.326108049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 12:49:56 addons-341255 crio[777]: time="2025-11-02T12:49:56.352759151Z" level=info msg="Created container 32e3229ddec3501c12cf318cbf76bef87f3225e225dfdf0bcf87e828823c6721: default/busybox/busybox" id=8a54b7b0-c004-42f8-84e3-d38d9f643fb9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 12:49:56 addons-341255 crio[777]: time="2025-11-02T12:49:56.353307495Z" level=info msg="Starting container: 32e3229ddec3501c12cf318cbf76bef87f3225e225dfdf0bcf87e828823c6721" id=c941b896-1fbf-4ad6-aec1-71a1c8351b32 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 12:49:56 addons-341255 crio[777]: time="2025-11-02T12:49:56.355206325Z" level=info msg="Started container" PID=6723 containerID=32e3229ddec3501c12cf318cbf76bef87f3225e225dfdf0bcf87e828823c6721 description=default/busybox/busybox id=c941b896-1fbf-4ad6-aec1-71a1c8351b32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=44db3cb59aabc1e80f69554e3a88a15c1dacaa2fbdca3571a23e54321061c86d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	32e3229ddec35       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   44db3cb59aabc       busybox                                     default
	fbe639b2e9dc1       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             52 seconds ago       Running             controller                               0                   b80477cbd65d4       ingress-nginx-controller-675c5ddd98-f7qb7   ingress-nginx
	01ffcd58446f1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 55 seconds ago       Running             gcp-auth                                 0                   dfe86d2ee286b       gcp-auth-78565c9fb4-c6tbn                   gcp-auth
	10c8828416e15       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          59 seconds ago       Running             csi-snapshotter                          0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	335dea3014c65       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          About a minute ago   Running             csi-provisioner                          0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	d965e3c9f58f1       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	22e0d656997f4       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           About a minute ago   Running             hostpath                                 0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	51708e3b1d7e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            About a minute ago   Running             gadget                                   0                   708bf147a941d       gadget-5bvt2                                gadget
	0f107dadfe187       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                About a minute ago   Running             node-driver-registrar                    0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	594c6f0eb785a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              About a minute ago   Running             registry-proxy                           0                   2280e43bab659       registry-proxy-2rjx9                        kube-system
	b27bf0b460e8b       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   378277074f44c       nvidia-device-plugin-daemonset-5g45d        kube-system
	0e2b30bfc0c00       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     About a minute ago   Running             amd-gpu-device-plugin                    0                   4fa6d53b3e519       amd-gpu-device-plugin-kjxsc                 kube-system
	bb2515742aa6f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   85a81da1438ed       snapshot-controller-7d9fbc56b8-lrxfs        kube-system
	80c697c3d1f58       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   3ff4d5e4e9418       csi-hostpathplugin-dj5hr                    kube-system
	b8a160d819000       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   f5cdc54ab01d6       csi-hostpath-resizer-0                      kube-system
	7f356839f6e13       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             About a minute ago   Exited              patch                                    1                   30540d3818a4a       ingress-nginx-admission-patch-28fhs         ingress-nginx
	778f25bd3fb1d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   0d7be58985e4c       ingress-nginx-admission-create-6nwnq        ingress-nginx
	12ae95cb9bed4       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   f00e7f7c7f2fb       snapshot-controller-7d9fbc56b8-d8c66        kube-system
	69fcc9180e578       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   53c67a2d10336       csi-hostpath-attacher-0                     kube-system
	5ae01cdfa3f78       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   8cacc2f156fb5       cloud-spanner-emulator-86bd5cbb97-qg8w6     default
	55e4c687c6f1b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   fb9109f719725       local-path-provisioner-648f6765c9-9x2dm     local-path-storage
	450cdc62b3458       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   94e4054affe7d       registry-6b586f9694-w59vr                   kube-system
	9ae7b5a96cea4       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   23ebd676dee25       yakd-dashboard-5ff678cb9-plk2f              yakd-dashboard
	2513ea12acbf8       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   7c68ce2f49499       metrics-server-85b7d694d7-gxjkw             kube-system
	41eb7ad7b2799       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   82443591c18eb       kube-ingress-dns-minikube                   kube-system
	589499b7daf04       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   c8fd67887d232       coredns-66bc5c9577-pvw29                    kube-system
	157ed615657fe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   3641637f6bf88       storage-provisioner                         kube-system
	b21a6de10950e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago        Running             kindnet-cni                              0                   3c0f818d95012       kindnet-wsss9                               kube-system
	8a34986297bdf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             2 minutes ago        Running             kube-proxy                               0                   cd40f3f885433       kube-proxy-prdwm                            kube-system
	597a4d36c6b41       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   77289814384a2       kube-scheduler-addons-341255                kube-system
	4bccdcbc84a5c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   bb5efe6569e63       kube-controller-manager-addons-341255       kube-system
	16ef3c3243c97       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   6f9d6af1fdbec       etcd-addons-341255                          kube-system
	566e394627151       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   272e93517f3bd       kube-apiserver-addons-341255                kube-system
	
	
	==> coredns [589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce] <==
	[INFO] 10.244.0.19:48229 - 62034 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.002528646s
	[INFO] 10.244.0.19:54963 - 48572 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000096376s
	[INFO] 10.244.0.19:54963 - 48814 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000166491s
	[INFO] 10.244.0.19:44274 - 58513 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000070791s
	[INFO] 10.244.0.19:44274 - 58717 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000120248s
	[INFO] 10.244.0.19:56382 - 48328 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000053918s
	[INFO] 10.244.0.19:56382 - 47928 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000093256s
	[INFO] 10.244.0.19:34322 - 64685 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000105998s
	[INFO] 10.244.0.19:34322 - 64287 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000107231s
	[INFO] 10.244.0.22:50852 - 18459 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000163827s
	[INFO] 10.244.0.22:39840 - 44330 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00027175s
	[INFO] 10.244.0.22:39473 - 20177 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136778s
	[INFO] 10.244.0.22:60675 - 1055 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00015751s
	[INFO] 10.244.0.22:42759 - 23332 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110411s
	[INFO] 10.244.0.22:49996 - 32290 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00018469s
	[INFO] 10.244.0.22:53903 - 18157 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005329214s
	[INFO] 10.244.0.22:40663 - 32499 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005825106s
	[INFO] 10.244.0.22:33227 - 7976 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005480227s
	[INFO] 10.244.0.22:43792 - 47265 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007054296s
	[INFO] 10.244.0.22:41718 - 37512 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004654744s
	[INFO] 10.244.0.22:35947 - 1239 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005426991s
	[INFO] 10.244.0.22:50538 - 21845 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00479228s
	[INFO] 10.244.0.22:44237 - 38764 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004829608s
	[INFO] 10.244.0.22:33517 - 4911 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001198121s
	[INFO] 10.244.0.22:47705 - 11986 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00195774s
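	The NXDOMAIN runs above are ordinary search-path expansion, not failures: the pod's resolv.conf (Kubernetes defaults to ndots:5) makes the resolver try each search-domain suffix (.local, .us-east4-a.c.k8s-minikube.internal, .google.internal, ...) before the final absolute query, which is the one answering NOERROR. A trailing dot marks a name as fully qualified and skips the expansion, as this small check illustrates (it only resolves from inside the cluster):
	
	// fqdn.go: the trailing dot makes the name absolute, so the resolver
	// skips search-domain expansion — the source of the NXDOMAIN chains above.
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// Without the trailing dot, ".google.internal" etc. would be appended first.
		addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}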
	
	
	==> describe nodes <==
	Name:               addons-341255
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-341255
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=addons-341255
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T12_47_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-341255
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-341255"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 12:47:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-341255
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 12:49:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 12:49:18 +0000   Sun, 02 Nov 2025 12:47:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 12:49:18 +0000   Sun, 02 Nov 2025 12:47:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 12:49:18 +0000   Sun, 02 Nov 2025 12:47:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 12:49:18 +0000   Sun, 02 Nov 2025 12:48:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-341255
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                82d5416c-386b-4580-893a-4c29b1676015
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-86bd5cbb97-qg8w6      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  gadget                      gadget-5bvt2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  gcp-auth                    gcp-auth-78565c9fb4-c6tbn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-f7qb7    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m9s
	  kube-system                 amd-gpu-device-plugin-kjxsc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-pvw29                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m11s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 csi-hostpathplugin-dj5hr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 etcd-addons-341255                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m18s
	  kube-system                 kindnet-wsss9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m11s
	  kube-system                 kube-apiserver-addons-341255                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-addons-341255        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-prdwm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-scheduler-addons-341255                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 metrics-server-85b7d694d7-gxjkw              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m10s
	  kube-system                 nvidia-device-plugin-daemonset-5g45d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 registry-6b586f9694-w59vr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 registry-creds-764b6fb674-xqr5t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 registry-proxy-2rjx9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 snapshot-controller-7d9fbc56b8-d8c66         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 snapshot-controller-7d9fbc56b8-lrxfs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  local-path-storage          local-path-provisioner-648f6765c9-9x2dm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-plk2f               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m9s   kube-proxy       
	  Normal  Starting                 2m17s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m17s  kubelet          Node addons-341255 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s  kubelet          Node addons-341255 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s  kubelet          Node addons-341255 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m12s  node-controller  Node addons-341255 event: Registered Node addons-341255 in Controller
	  Normal  NodeReady                90s    kubelet          Node addons-341255 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 2 12:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001637] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087017] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405997] i8042: Warning: Keylock active
	[  +0.010199] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.504900] block sda: the capability attribute has been deprecated.
	[  +0.083631] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023935] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.640330] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08] <==
	{"level":"warn","ts":"2025-11-02T12:47:44.422487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.428558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.435313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.441142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.448949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.455113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.461804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.475284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.481208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.489117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:44.537996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:55.911111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:47:55.917220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:48:22.257462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:48:22.263769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:48:22.279576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:48:22.286600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:48:42.533898Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.604566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-02T12:48:42.533989Z","caller":"traceutil/trace.go:172","msg":"trace[7521513] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:959; }","duration":"113.719761ms","start":"2025-11-02T12:48:42.420259Z","end":"2025-11-02T12:48:42.533978Z","steps":["trace[7521513] 'range keys from in-memory index tree'  (duration: 113.519179ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T12:48:42.644735Z","caller":"traceutil/trace.go:172","msg":"trace[1848461068] transaction","detail":"{read_only:false; response_revision:961; number_of_response:1; }","duration":"105.176239ms","start":"2025-11-02T12:48:42.539540Z","end":"2025-11-02T12:48:42.644716Z","steps":["trace[1848461068] 'process raft request'  (duration: 101.5453ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T12:48:52.789125Z","caller":"traceutil/trace.go:172","msg":"trace[1199722892] transaction","detail":"{read_only:false; response_revision:1057; number_of_response:1; }","duration":"138.812201ms","start":"2025-11-02T12:48:52.650291Z","end":"2025-11-02T12:48:52.789103Z","steps":["trace[1199722892] 'process raft request'  (duration: 54.309757ms)","trace[1199722892] 'compare'  (duration: 84.265544ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-02T12:49:01.411687Z","caller":"traceutil/trace.go:172","msg":"trace[1827374070] linearizableReadLoop","detail":"{readStateIndex:1163; appliedIndex:1163; }","duration":"134.668877ms","start":"2025-11-02T12:49:01.277001Z","end":"2025-11-02T12:49:01.411670Z","steps":["trace[1827374070] 'read index received'  (duration: 134.66276ms)","trace[1827374070] 'applied index is now lower than readState.Index'  (duration: 4.633µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-02T12:49:01.411779Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.768253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-02T12:49:01.411796Z","caller":"traceutil/trace.go:172","msg":"trace[828441304] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1129; }","duration":"134.797973ms","start":"2025-11-02T12:49:01.276993Z","end":"2025-11-02T12:49:01.411791Z","steps":["trace[828441304] 'agreement among raft nodes before linearized reading'  (duration: 134.74384ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T12:49:01.411835Z","caller":"traceutil/trace.go:172","msg":"trace[84704451] transaction","detail":"{read_only:false; response_revision:1130; number_of_response:1; }","duration":"153.097575ms","start":"2025-11-02T12:49:01.258728Z","end":"2025-11-02T12:49:01.411825Z","steps":["trace[84704451] 'process raft request'  (duration: 152.977123ms)"],"step_count":1}
	
	
	==> gcp-auth [01ffcd58446f19ea5e6ba66051369a5d913a8ce4806cb716c4abe720de2e46c5] <==
	2025/11/02 12:49:08 GCP Auth Webhook started!
	2025/11/02 12:49:54 Ready to marshal response ...
	2025/11/02 12:49:54 Ready to write response ...
	2025/11/02 12:49:54 Ready to marshal response ...
	2025/11/02 12:49:54 Ready to write response ...
	2025/11/02 12:49:54 Ready to marshal response ...
	2025/11/02 12:49:54 Ready to write response ...
	
	
	==> kernel <==
	 12:50:04 up 32 min,  0 user,  load average: 1.50, 1.14, 0.48
	Linux addons-341255 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3] <==
	E1102 12:48:24.356981       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1102 12:48:24.358183       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1102 12:48:25.857149       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 12:48:25.857177       1 metrics.go:72] Registering metrics
	I1102 12:48:25.857231       1 controller.go:711] "Syncing nftables rules"
	I1102 12:48:34.262958       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:48:34.263001       1 main.go:301] handling current node
	I1102 12:48:44.263037       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:48:44.263106       1 main.go:301] handling current node
	I1102 12:48:54.262891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:48:54.262921       1 main.go:301] handling current node
	I1102 12:49:04.262753       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:49:04.262800       1 main.go:301] handling current node
	I1102 12:49:14.262378       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:49:14.262434       1 main.go:301] handling current node
	I1102 12:49:24.262974       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:49:24.263012       1 main.go:301] handling current node
	I1102 12:49:34.266642       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:49:34.266691       1 main.go:301] handling current node
	I1102 12:49:44.263878       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:49:44.263911       1 main.go:301] handling current node
	I1102 12:49:54.264105       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:49:54.264143       1 main.go:301] handling current node
	I1102 12:50:04.268668       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:50:04.268701       1 main.go:301] handling current node
	
	
	==> kube-apiserver [566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355] <==
	W1102 12:48:42.716248       1 handler_proxy.go:99] no RequestInfo found in the context
	E1102 12:48:42.716327       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1102 12:48:42.716330       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.165.64:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.165.64:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.165.64:443: connect: connection refused" logger="UnhandledError"
	W1102 12:48:43.717158       1 handler_proxy.go:99] no RequestInfo found in the context
	E1102 12:48:43.717233       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1102 12:48:43.717159       1 handler_proxy.go:99] no RequestInfo found in the context
	I1102 12:48:43.717247       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1102 12:48:43.717265       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1102 12:48:43.718398       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1102 12:48:45.008494       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1102 12:48:47.723644       1 handler_proxy.go:99] no RequestInfo found in the context
	E1102 12:48:47.723757       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1102 12:48:47.723754       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.165.64:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.165.64:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1102 12:48:47.739877       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1102 12:50:02.772175       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43096: use of closed network connection
	E1102 12:50:02.921801       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43132: use of closed network connection
	
	
	==> kube-controller-manager [4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94] <==
	I1102 12:47:52.242297       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1102 12:47:52.242474       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 12:47:52.242911       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 12:47:52.242945       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 12:47:52.242961       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 12:47:52.243100       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1102 12:47:52.244160       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 12:47:52.244163       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 12:47:52.247842       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 12:47:52.257608       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1102 12:47:52.257690       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1102 12:47:52.257736       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1102 12:47:52.257747       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1102 12:47:52.257754       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1102 12:47:52.263664       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-341255" podCIDRs=["10.244.0.0/24"]
	I1102 12:47:52.264613       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1102 12:47:54.771374       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1102 12:48:22.251989       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1102 12:48:22.252123       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1102 12:48:22.252159       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1102 12:48:22.270908       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1102 12:48:22.273889       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1102 12:48:22.353027       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 12:48:22.374372       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 12:48:37.249189       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47] <==
	I1102 12:47:53.827218       1 server_linux.go:53] "Using iptables proxy"
	I1102 12:47:54.019048       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 12:47:54.170645       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 12:47:54.171242       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1102 12:47:54.187734       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 12:47:54.495023       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 12:47:54.495103       1 server_linux.go:132] "Using iptables Proxier"
	I1102 12:47:54.590678       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 12:47:54.606393       1 server.go:527] "Version info" version="v1.34.1"
	I1102 12:47:54.606438       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 12:47:54.609871       1 config.go:200] "Starting service config controller"
	I1102 12:47:54.609943       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 12:47:54.609995       1 config.go:106] "Starting endpoint slice config controller"
	I1102 12:47:54.610023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 12:47:54.610056       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 12:47:54.610080       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 12:47:54.610452       1 config.go:309] "Starting node config controller"
	I1102 12:47:54.610478       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 12:47:54.710982       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 12:47:54.712630       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 12:47:54.712648       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 12:47:54.712656       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d] <==
	I1102 12:47:45.329814       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 12:47:45.330078       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 12:47:45.330137       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1102 12:47:45.330961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1102 12:47:45.331783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 12:47:45.331852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 12:47:45.331942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 12:47:45.332009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 12:47:45.332124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 12:47:45.332278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 12:47:45.332340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 12:47:45.332393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 12:47:45.332451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 12:47:45.332506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 12:47:45.332519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 12:47:45.332593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 12:47:45.332687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 12:47:45.332720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 12:47:45.332825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 12:47:45.332935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 12:47:45.332998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 12:47:45.333166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 12:47:46.141521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 12:47:46.171597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1102 12:47:46.730609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 12:48:59 addons-341255 kubelet[1283]: I1102 12:48:59.582286    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-2rjx9" podStartSLOduration=1.63383503 podStartE2EDuration="25.58226602s" podCreationTimestamp="2025-11-02 12:48:34 +0000 UTC" firstStartedPulling="2025-11-02 12:48:35.075828069 +0000 UTC m=+47.847782227" lastFinishedPulling="2025-11-02 12:48:59.024259051 +0000 UTC m=+71.796213217" observedRunningTime="2025-11-02 12:48:59.581005479 +0000 UTC m=+72.352959645" watchObservedRunningTime="2025-11-02 12:48:59.58226602 +0000 UTC m=+72.354220187"
	Nov 02 12:49:00 addons-341255 kubelet[1283]: I1102 12:49:00.580549    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2rjx9" secret="" err="secret \"gcp-auth\" not found"
	Nov 02 12:49:02 addons-341255 kubelet[1283]: I1102 12:49:02.607676    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-5bvt2" podStartSLOduration=65.847491692 podStartE2EDuration="1m8.607652398s" podCreationTimestamp="2025-11-02 12:47:54 +0000 UTC" firstStartedPulling="2025-11-02 12:48:59.140724506 +0000 UTC m=+71.912678652" lastFinishedPulling="2025-11-02 12:49:01.9008852 +0000 UTC m=+74.672839358" observedRunningTime="2025-11-02 12:49:02.607225852 +0000 UTC m=+75.379180018" watchObservedRunningTime="2025-11-02 12:49:02.607652398 +0000 UTC m=+75.379606565"
	Nov 02 12:49:04 addons-341255 kubelet[1283]: I1102 12:49:04.372064    1283 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 02 12:49:04 addons-341255 kubelet[1283]: I1102 12:49:04.372107    1283 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 02 12:49:05 addons-341255 kubelet[1283]: I1102 12:49:05.621236    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-dj5hr" podStartSLOduration=1.381438795 podStartE2EDuration="31.621213477s" podCreationTimestamp="2025-11-02 12:48:34 +0000 UTC" firstStartedPulling="2025-11-02 12:48:35.047857967 +0000 UTC m=+47.819812126" lastFinishedPulling="2025-11-02 12:49:05.287632662 +0000 UTC m=+78.059586808" observedRunningTime="2025-11-02 12:49:05.620877852 +0000 UTC m=+78.392832041" watchObservedRunningTime="2025-11-02 12:49:05.621213477 +0000 UTC m=+78.393167646"
	Nov 02 12:49:06 addons-341255 kubelet[1283]: E1102 12:49:06.430668    1283 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 02 12:49:06 addons-341255 kubelet[1283]: E1102 12:49:06.430748    1283 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e050c27-8e52-47ac-a415-124991eae36a-gcr-creds podName:7e050c27-8e52-47ac-a415-124991eae36a nodeName:}" failed. No retries permitted until 2025-11-02 12:49:38.430734 +0000 UTC m=+111.202688145 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/7e050c27-8e52-47ac-a415-124991eae36a-gcr-creds") pod "registry-creds-764b6fb674-xqr5t" (UID: "7e050c27-8e52-47ac-a415-124991eae36a") : secret "registry-creds-gcr" not found
	Nov 02 12:49:07 addons-341255 kubelet[1283]: I1102 12:49:07.321694    1283 scope.go:117] "RemoveContainer" containerID="2cf03f8614a7a3bb972db2c587a4897c2b3ce0567b8c2fd3cf2726cf2adf9ad3"
	Nov 02 12:49:07 addons-341255 kubelet[1283]: I1102 12:49:07.620744    1283 scope.go:117] "RemoveContainer" containerID="2cf03f8614a7a3bb972db2c587a4897c2b3ce0567b8c2fd3cf2726cf2adf9ad3"
	Nov 02 12:49:08 addons-341255 kubelet[1283]: I1102 12:49:08.637855    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-c6tbn" podStartSLOduration=65.935345411 podStartE2EDuration="1m7.637796017s" podCreationTimestamp="2025-11-02 12:48:01 +0000 UTC" firstStartedPulling="2025-11-02 12:49:06.768756425 +0000 UTC m=+79.540710571" lastFinishedPulling="2025-11-02 12:49:08.471207018 +0000 UTC m=+81.243161177" observedRunningTime="2025-11-02 12:49:08.637710356 +0000 UTC m=+81.409664533" watchObservedRunningTime="2025-11-02 12:49:08.637796017 +0000 UTC m=+81.409750183"
	Nov 02 12:49:08 addons-341255 kubelet[1283]: I1102 12:49:08.748223    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4xgm\" (UniqueName: \"kubernetes.io/projected/0dcfd84d-4136-457b-8f35-bbd6c9b5a2fb-kube-api-access-k4xgm\") pod \"0dcfd84d-4136-457b-8f35-bbd6c9b5a2fb\" (UID: \"0dcfd84d-4136-457b-8f35-bbd6c9b5a2fb\") "
	Nov 02 12:49:08 addons-341255 kubelet[1283]: I1102 12:49:08.750907    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dcfd84d-4136-457b-8f35-bbd6c9b5a2fb-kube-api-access-k4xgm" (OuterVolumeSpecName: "kube-api-access-k4xgm") pod "0dcfd84d-4136-457b-8f35-bbd6c9b5a2fb" (UID: "0dcfd84d-4136-457b-8f35-bbd6c9b5a2fb"). InnerVolumeSpecName "kube-api-access-k4xgm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 02 12:49:08 addons-341255 kubelet[1283]: I1102 12:49:08.849380    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k4xgm\" (UniqueName: \"kubernetes.io/projected/0dcfd84d-4136-457b-8f35-bbd6c9b5a2fb-kube-api-access-k4xgm\") on node \"addons-341255\" DevicePath \"\""
	Nov 02 12:49:09 addons-341255 kubelet[1283]: I1102 12:49:09.634152    1283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3b8302deb5b355e9f560d027d244b448e17bc7bb56628fae2c7e8fc25f188fe"
	Nov 02 12:49:12 addons-341255 kubelet[1283]: I1102 12:49:12.657719    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-f7qb7" podStartSLOduration=72.126322427 podStartE2EDuration="1m17.657699011s" podCreationTimestamp="2025-11-02 12:47:55 +0000 UTC" firstStartedPulling="2025-11-02 12:49:06.774429016 +0000 UTC m=+79.546383175" lastFinishedPulling="2025-11-02 12:49:12.30580561 +0000 UTC m=+85.077759759" observedRunningTime="2025-11-02 12:49:12.656410469 +0000 UTC m=+85.428364648" watchObservedRunningTime="2025-11-02 12:49:12.657699011 +0000 UTC m=+85.429653180"
	Nov 02 12:49:29 addons-341255 kubelet[1283]: I1102 12:49:29.322992    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="139ae445-ce4e-41cc-a7f6-3400980e2a6d" path="/var/lib/kubelet/pods/139ae445-ce4e-41cc-a7f6-3400980e2a6d/volumes"
	Nov 02 12:49:38 addons-341255 kubelet[1283]: E1102 12:49:38.481001    1283 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 02 12:49:38 addons-341255 kubelet[1283]: E1102 12:49:38.481123    1283 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e050c27-8e52-47ac-a415-124991eae36a-gcr-creds podName:7e050c27-8e52-47ac-a415-124991eae36a nodeName:}" failed. No retries permitted until 2025-11-02 12:50:42.481098287 +0000 UTC m=+175.253052449 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/7e050c27-8e52-47ac-a415-124991eae36a-gcr-creds") pod "registry-creds-764b6fb674-xqr5t" (UID: "7e050c27-8e52-47ac-a415-124991eae36a") : secret "registry-creds-gcr" not found
	Nov 02 12:49:39 addons-341255 kubelet[1283]: I1102 12:49:39.323345    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dcfd84d-4136-457b-8f35-bbd6c9b5a2fb" path="/var/lib/kubelet/pods/0dcfd84d-4136-457b-8f35-bbd6c9b5a2fb/volumes"
	Nov 02 12:49:47 addons-341255 kubelet[1283]: I1102 12:49:47.306492    1283 scope.go:117] "RemoveContainer" containerID="16d7814bd2764256b7a9632d35c7b9568822fb0ca37c9d2de5cf66690195ed19"
	Nov 02 12:49:47 addons-341255 kubelet[1283]: I1102 12:49:47.314358    1283 scope.go:117] "RemoveContainer" containerID="21eec368cdff65089ac079b2c61b7fd881c796cc6b3414f597867e134adfa805"
	Nov 02 12:49:54 addons-341255 kubelet[1283]: I1102 12:49:54.708059    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4bcca581-17a7-4233-ac03-1874944a76d9-gcp-creds\") pod \"busybox\" (UID: \"4bcca581-17a7-4233-ac03-1874944a76d9\") " pod="default/busybox"
	Nov 02 12:49:54 addons-341255 kubelet[1283]: I1102 12:49:54.708113    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t78vz\" (UniqueName: \"kubernetes.io/projected/4bcca581-17a7-4233-ac03-1874944a76d9-kube-api-access-t78vz\") pod \"busybox\" (UID: \"4bcca581-17a7-4233-ac03-1874944a76d9\") " pod="default/busybox"
	Nov 02 12:49:56 addons-341255 kubelet[1283]: I1102 12:49:56.815057    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.457281088 podStartE2EDuration="2.815036011s" podCreationTimestamp="2025-11-02 12:49:54 +0000 UTC" firstStartedPulling="2025-11-02 12:49:54.959011999 +0000 UTC m=+127.730966144" lastFinishedPulling="2025-11-02 12:49:56.316766919 +0000 UTC m=+129.088721067" observedRunningTime="2025-11-02 12:49:56.814524166 +0000 UTC m=+129.586478332" watchObservedRunningTime="2025-11-02 12:49:56.815036011 +0000 UTC m=+129.586990177"
	
	
	==> storage-provisioner [157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3] <==
	W1102 12:49:39.545838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:41.548794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:41.553408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:43.556606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:43.572159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:45.575329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:45.578781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:47.581558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:47.584962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:49.587815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:49.591504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:51.594413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:51.598116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:53.601686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:53.605179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:55.608178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:55.611791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:57.614734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:57.618427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:59.621299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:49:59.625189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:50:01.628102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:50:01.632477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:50:03.635633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 12:50:03.639035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
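The storage-provisioner log above is dominated by one repeated warning: v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the replacement call, assuming in-cluster configuration (the namespace and program layout are illustrative, not the provisioner's actual source):

	// endpointslices.go: hedged sketch of listing EndpointSlices instead of the
	// deprecated v1 Endpoints API that triggers the warnings above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes the code runs inside a pod
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// DiscoveryV1().EndpointSlices replaces CoreV1().Endpoints and does not
		// provoke the deprecation warning seen in the storage-provisioner log.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}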
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-341255 -n addons-341255
helpers_test.go:269: (dbg) Run:  kubectl --context addons-341255 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-6nwnq ingress-nginx-admission-patch-28fhs registry-creds-764b6fb674-xqr5t
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-341255 describe pod ingress-nginx-admission-create-6nwnq ingress-nginx-admission-patch-28fhs registry-creds-764b6fb674-xqr5t
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-341255 describe pod ingress-nginx-admission-create-6nwnq ingress-nginx-admission-patch-28fhs registry-creds-764b6fb674-xqr5t: exit status 1 (58.695638ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6nwnq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-28fhs" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-xqr5t" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-341255 describe pod ingress-nginx-admission-create-6nwnq ingress-nginx-admission-patch-28fhs registry-creds-764b6fb674-xqr5t: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable headlamp --alsologtostderr -v=1: exit status 11 (248.692956ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:50:05.488703   23830 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:50:05.488851   23830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:05.488862   23830 out.go:374] Setting ErrFile to fd 2...
	I1102 12:50:05.488868   23830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:05.490324   23830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:50:05.490666   23830 mustload.go:66] Loading cluster: addons-341255
	I1102 12:50:05.490994   23830 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:05.491008   23830 addons.go:607] checking whether the cluster is paused
	I1102 12:50:05.491090   23830 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:05.491107   23830 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:50:05.491445   23830 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:50:05.510129   23830 ssh_runner.go:195] Run: systemctl --version
	I1102 12:50:05.510199   23830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:50:05.529973   23830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:50:05.629122   23830 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:50:05.629196   23830 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:50:05.658800   23830 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:50:05.658826   23830 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:50:05.658849   23830 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:50:05.658858   23830 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:50:05.658863   23830 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:50:05.658868   23830 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:50:05.658873   23830 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:50:05.658877   23830 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:50:05.658883   23830 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:50:05.658892   23830 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:50:05.658900   23830 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:50:05.658904   23830 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:50:05.658913   23830 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:50:05.658918   23830 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:50:05.658926   23830 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:50:05.658933   23830 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:50:05.658940   23830 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:50:05.658946   23830 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:50:05.658950   23830 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:50:05.658954   23830 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:50:05.658971   23830 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:50:05.658977   23830 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:50:05.658982   23830 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:50:05.658989   23830 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:50:05.658994   23830 cri.go:89] found id: ""
	I1102 12:50:05.659040   23830 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:50:05.672557   23830 out.go:203] 
	W1102 12:50:05.674024   23830 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:50:05.674047   23830 out.go:285] * 
	* 
	W1102 12:50:05.677238   23830 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:50:05.678674   23830 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.51s)
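Every MK_ADDON_DISABLE_PAUSED failure in this report follows the pattern visible in the stderr above: the crictl listing of kube-system containers succeeds, but the subsequent `sudo runc list -f json` exits with status 1 because /run/runc does not exist on this CRI-O node. A minimal Go sketch of that two-step check, to make the failure mode concrete (a hypothetical standalone program, not minikube's actual implementation):

	// pausecheck.go: hedged reproduction of the paused-state check that aborts
	// `addons disable` above. Step 1 mirrors the crictl call that succeeds in
	// the log; step 2 mirrors the runc call that fails when the runtime state
	// directory /run/runc is absent.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("crictl found %d kube-system containers\n",
			len(strings.Fields(string(ids))))

		// This is the call that exits non-zero in the report: runc reads its
		// state from /run/runc, which this CRI-O configuration never creates.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed (as in the report): %v\n%s", err, out)
			return
		}
		fmt.Println(string(out))
	}

The sketch only demonstrates why the check exits with status 1; it does not claim that probing runc directly is the right paused-state check on a CRI-O node in the first place.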

TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-qg8w6" [1d4ed186-a9cd-495b-9dcc-d3fe81b52fe3] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002665037s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (251.11165ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:50:15.036286   25216 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:50:15.036541   25216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:15.036551   25216 out.go:374] Setting ErrFile to fd 2...
	I1102 12:50:15.036555   25216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:15.036783   25216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:50:15.037041   25216 mustload.go:66] Loading cluster: addons-341255
	I1102 12:50:15.037379   25216 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:15.037395   25216 addons.go:607] checking whether the cluster is paused
	I1102 12:50:15.037474   25216 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:15.037489   25216 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:50:15.038042   25216 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:50:15.056027   25216 ssh_runner.go:195] Run: systemctl --version
	I1102 12:50:15.056081   25216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:50:15.073854   25216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:50:15.173951   25216 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:50:15.174062   25216 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:50:15.208116   25216 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:50:15.208143   25216 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:50:15.208149   25216 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:50:15.208153   25216 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:50:15.208157   25216 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:50:15.208161   25216 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:50:15.208165   25216 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:50:15.208169   25216 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:50:15.208173   25216 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:50:15.208190   25216 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:50:15.208195   25216 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:50:15.208199   25216 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:50:15.208208   25216 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:50:15.208217   25216 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:50:15.208221   25216 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:50:15.208231   25216 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:50:15.208238   25216 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:50:15.208244   25216 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:50:15.208248   25216 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:50:15.208252   25216 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:50:15.208256   25216 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:50:15.208261   25216 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:50:15.208265   25216 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:50:15.208268   25216 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:50:15.208272   25216 cri.go:89] found id: ""
	I1102 12:50:15.208320   25216 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:50:15.224202   25216 out.go:203] 
	W1102 12:50:15.225492   25216 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:50:15.225517   25216 out.go:285] * 
	* 
	W1102 12:50:15.228618   25216 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:50:15.230027   25216 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)
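The cloud-spanner-emulator pod itself was healthy within roughly 5s; the test fails only at the same `addons disable` pause check dissected after the Headlamp entry above, so the runc sketch there applies here unchanged.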

TestAddons/parallel/LocalPath (8.12s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-341255 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-341255 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341255 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [0c48b218-3ee4-40ab-87b1-65f595ba547b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [0c48b218-3ee4-40ab-87b1-65f595ba547b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [0c48b218-3ee4-40ab-87b1-65f595ba547b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002654869s
addons_test.go:967: (dbg) Run:  kubectl --context addons-341255 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 ssh "cat /opt/local-path-provisioner/pvc-a6c8ab11-af96-4d9e-befc-978d62d9294e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-341255 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-341255 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (251.269614ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:50:18.850965   25843 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:50:18.851097   25843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:18.851108   25843 out.go:374] Setting ErrFile to fd 2...
	I1102 12:50:18.851112   25843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:18.851701   25843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:50:18.852142   25843 mustload.go:66] Loading cluster: addons-341255
	I1102 12:50:18.852803   25843 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:18.852829   25843 addons.go:607] checking whether the cluster is paused
	I1102 12:50:18.852930   25843 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:18.852947   25843 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:50:18.853309   25843 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:50:18.871077   25843 ssh_runner.go:195] Run: systemctl --version
	I1102 12:50:18.871141   25843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:50:18.889242   25843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:50:18.991372   25843 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:50:18.991450   25843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:50:19.021250   25843 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:50:19.021275   25843 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:50:19.021279   25843 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:50:19.021282   25843 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:50:19.021285   25843 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:50:19.021290   25843 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:50:19.021302   25843 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:50:19.021305   25843 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:50:19.021307   25843 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:50:19.021313   25843 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:50:19.021316   25843 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:50:19.021322   25843 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:50:19.021327   25843 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:50:19.021331   25843 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:50:19.021345   25843 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:50:19.021365   25843 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:50:19.021375   25843 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:50:19.021381   25843 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:50:19.021385   25843 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:50:19.021387   25843 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:50:19.021397   25843 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:50:19.021402   25843 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:50:19.021405   25843 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:50:19.021407   25843 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:50:19.021409   25843 cri.go:89] found id: ""
	I1102 12:50:19.021449   25843 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:50:19.037738   25843 out.go:203] 
	W1102 12:50:19.038980   25843 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:50:19.039008   25843 out.go:285] * 
	* 
	W1102 12:50:19.042377   25843 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:50:19.043656   25843 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.12s)
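
Note: this failure, and every other addons-disable failure in this run, fails the same way. Before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node; the command exits with status 1 because /run/runc (runc's default state directory) does not exist on this CRI-O node, and minikube then aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch for confirming this by hand against the profile from this run, assuming the cluster is still up:

	# reproduce the paused-state check that minikube runs internally
	minikube -p addons-341255 ssh -- sudo runc list -f json

	# the CRI-level listing that does succeed on this node (same query as the cri.go lines above)
	minikube -p addons-341255 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system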

TestAddons/parallel/NvidiaDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5g45d" [cd4170a6-d3bf-437b-8100-33df2a1c3693] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00383604s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (241.167159ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:50:08.229266   23912 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:50:08.229531   23912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:08.229540   23912 out.go:374] Setting ErrFile to fd 2...
	I1102 12:50:08.229544   23912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:08.229739   23912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:50:08.229974   23912 mustload.go:66] Loading cluster: addons-341255
	I1102 12:50:08.230310   23912 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:08.230325   23912 addons.go:607] checking whether the cluster is paused
	I1102 12:50:08.230413   23912 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:08.230427   23912 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:50:08.230825   23912 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:50:08.248896   23912 ssh_runner.go:195] Run: systemctl --version
	I1102 12:50:08.248960   23912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:50:08.266276   23912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:50:08.364179   23912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:50:08.364257   23912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:50:08.392446   23912 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:50:08.392466   23912 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:50:08.392470   23912 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:50:08.392473   23912 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:50:08.392477   23912 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:50:08.392480   23912 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:50:08.392483   23912 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:50:08.392485   23912 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:50:08.392488   23912 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:50:08.392493   23912 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:50:08.392497   23912 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:50:08.392501   23912 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:50:08.392505   23912 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:50:08.392508   23912 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:50:08.392512   23912 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:50:08.392522   23912 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:50:08.392531   23912 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:50:08.392537   23912 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:50:08.392542   23912 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:50:08.392545   23912 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:50:08.392547   23912 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:50:08.392549   23912 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:50:08.392552   23912 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:50:08.392554   23912 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:50:08.392557   23912 cri.go:89] found id: ""
	I1102 12:50:08.392616   23912 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:50:08.406388   23912 out.go:203] 
	W1102 12:50:08.407709   23912 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:50:08.407725   23912 out.go:285] * 
	* 
	W1102 12:50:08.410626   23912 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:50:08.412052   23912 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

TestAddons/parallel/Yakd (5.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-plk2f" [43e2ecc5-6c82-45ef-ba74-6212d30da972] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003437419s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable yakd --alsologtostderr -v=1: exit status 11 (251.910577ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:50:24.106293   26049 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:50:24.106602   26049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:24.106613   26049 out.go:374] Setting ErrFile to fd 2...
	I1102 12:50:24.106617   26049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:24.106800   26049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:50:24.107061   26049 mustload.go:66] Loading cluster: addons-341255
	I1102 12:50:24.107394   26049 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:24.107409   26049 addons.go:607] checking whether the cluster is paused
	I1102 12:50:24.107491   26049 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:24.107506   26049 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:50:24.107870   26049 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:50:24.126315   26049 ssh_runner.go:195] Run: systemctl --version
	I1102 12:50:24.126373   26049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:50:24.146969   26049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:50:24.248471   26049 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:50:24.248555   26049 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:50:24.279785   26049 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:50:24.279807   26049 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:50:24.279815   26049 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:50:24.279818   26049 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:50:24.279821   26049 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:50:24.279825   26049 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:50:24.279827   26049 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:50:24.279829   26049 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:50:24.279832   26049 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:50:24.279837   26049 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:50:24.279851   26049 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:50:24.279857   26049 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:50:24.279859   26049 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:50:24.279861   26049 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:50:24.279864   26049 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:50:24.279875   26049 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:50:24.279882   26049 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:50:24.279885   26049 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:50:24.279888   26049 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:50:24.279890   26049 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:50:24.279892   26049 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:50:24.279895   26049 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:50:24.279897   26049 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:50:24.279899   26049 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:50:24.279902   26049 cri.go:89] found id: ""
	I1102 12:50:24.279940   26049 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:50:24.294784   26049 out.go:203] 
	W1102 12:50:24.296244   26049 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:50:24.296266   26049 out.go:285] * 
	* 
	W1102 12:50:24.299241   26049 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:50:24.300531   26049 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

TestAddons/parallel/AmdGpuDevicePlugin (6.27s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-kjxsc" [f1aff96e-1b05-4f54-8ca1-4dec91ec69de] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003432441s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341255 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341255 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (262.147895ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 12:50:24.416655   26125 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:50:24.416927   26125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:24.416940   26125 out.go:374] Setting ErrFile to fd 2...
	I1102 12:50:24.416948   26125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:50:24.417230   26125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:50:24.417498   26125 mustload.go:66] Loading cluster: addons-341255
	I1102 12:50:24.417842   26125 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:24.417859   26125 addons.go:607] checking whether the cluster is paused
	I1102 12:50:24.417945   26125 config.go:182] Loaded profile config "addons-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:50:24.417960   26125 host.go:66] Checking if "addons-341255" exists ...
	I1102 12:50:24.418324   26125 cli_runner.go:164] Run: docker container inspect addons-341255 --format={{.State.Status}}
	I1102 12:50:24.437095   26125 ssh_runner.go:195] Run: systemctl --version
	I1102 12:50:24.437150   26125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341255
	I1102 12:50:24.456826   26125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/addons-341255/id_rsa Username:docker}
	I1102 12:50:24.556347   26125 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 12:50:24.556425   26125 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 12:50:24.585744   26125 cri.go:89] found id: "10c8828416e1573af0bde681553cc1da3516537ae69ce4869a5a61ebbca7f495"
	I1102 12:50:24.585769   26125 cri.go:89] found id: "335dea3014c6518dc5e80b0f1f41d4b8fb8963c113ef39d0cc7665b0a6bb45d9"
	I1102 12:50:24.585774   26125 cri.go:89] found id: "d965e3c9f58f18b332ee733ecae5d0c3152e0f7bcb9a5abb1a13aa105cfb3b31"
	I1102 12:50:24.585778   26125 cri.go:89] found id: "22e0d656997f498029af85064ad9c77d4cee6286599f05d69ec84fc56904b148"
	I1102 12:50:24.585781   26125 cri.go:89] found id: "0f107dadfe18789dff0b1405a2d4cf58597f2d8815a59db2e18434f20e558921"
	I1102 12:50:24.585784   26125 cri.go:89] found id: "594c6f0eb785a638c2e621294f78c92198a3b92b420925fa5fc61f8d3cbbe572"
	I1102 12:50:24.585787   26125 cri.go:89] found id: "b27bf0b460e8ba8e990430735573a4b653ee34a2bbc9d33388736ea983947e4f"
	I1102 12:50:24.585789   26125 cri.go:89] found id: "0e2b30bfc0c00456a552d972c225495134bd931d4ccb35c3eab1920c05f71e45"
	I1102 12:50:24.585792   26125 cri.go:89] found id: "bb2515742aa6fc12a12a4a9c8221e2fbf82bc4188787e37e11008aceba4312da"
	I1102 12:50:24.585799   26125 cri.go:89] found id: "80c697c3d1f58253ff6d6c2d6c9977d4d6246f024451221f472a74afad278c28"
	I1102 12:50:24.585803   26125 cri.go:89] found id: "b8a160d819000f33922d7da2409624b04a22d836f5c4aa00dbb95ec7de6e54ff"
	I1102 12:50:24.585807   26125 cri.go:89] found id: "12ae95cb9bed431bf9a3616335c9f6c44f7480104e99392668b8add99bea63a3"
	I1102 12:50:24.585811   26125 cri.go:89] found id: "69fcc9180e57848f48b849e91b17826c574f153da5d1f46d935fb284a8110153"
	I1102 12:50:24.585819   26125 cri.go:89] found id: "450cdc62b3458f4cb68f3f126c2f09619567d1b8081a7b8266df4a585379f895"
	I1102 12:50:24.585835   26125 cri.go:89] found id: "2513ea12acbf80a12dd58d41ffc3a338405b333f9c1319520525a58ebc9c2ef2"
	I1102 12:50:24.585848   26125 cri.go:89] found id: "41eb7ad7b27997428a56b6f5f6dadcd4f9a22e8632535deb80bc82c14f808bfe"
	I1102 12:50:24.585854   26125 cri.go:89] found id: "589499b7daf046107d171bcce27aae299ebeb02c68ba9eaad6f0183860d869ce"
	I1102 12:50:24.585859   26125 cri.go:89] found id: "157ed615657fe88b4b043ab314aa2b35b032959be08bc74fb73d5e54829d61a3"
	I1102 12:50:24.585862   26125 cri.go:89] found id: "b21a6de10950e8679f0a01a28c465c6c16d6d934e9239568ad0b1c347325d7c3"
	I1102 12:50:24.585864   26125 cri.go:89] found id: "8a34986297bdff6c43ffad00a07719aaccb2fef53319308ac94fb669b545ba47"
	I1102 12:50:24.585867   26125 cri.go:89] found id: "597a4d36c6b41aab061c486228a406a38e428cda9fe7eed744f5cb9d0f87a50d"
	I1102 12:50:24.585870   26125 cri.go:89] found id: "4bccdcbc84a5c92a30e2c3e3dff6135d6365ddeaa4ac1d9bea7dc1f1d10d3a94"
	I1102 12:50:24.585872   26125 cri.go:89] found id: "16ef3c3243c970a8125c6ee9b461f1c7dc5d5e80de5e88e24edd53eecaaa4f08"
	I1102 12:50:24.585875   26125 cri.go:89] found id: "566e39462715189d1923c7c742826f2214b5f2d0124a80f7ff204e0d5ffa1355"
	I1102 12:50:24.585878   26125 cri.go:89] found id: ""
	I1102 12:50:24.585929   26125 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 12:50:24.600424   26125 out.go:203] 
	W1102 12:50:24.601610   26125 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:50:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 12:50:24.601637   26125 out.go:285] * 
	* 
	W1102 12:50:24.604961   26125 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 12:50:24.606538   26125 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-341255 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.27s)

TestFunctional/parallel/ServiceCmdConnect (602.88s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-529076 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-529076 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-rcvd4" [e20a2a18-941a-4e11-9ac8-235c9c06cc3c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-529076 -n functional-529076
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-02 13:05:46.125555337 +0000 UTC m=+1127.945607406
functional_test.go:1645: (dbg) Run:  kubectl --context functional-529076 describe po hello-node-connect-7d85dfc575-rcvd4 -n default
functional_test.go:1645: (dbg) kubectl --context functional-529076 describe po hello-node-connect-7d85dfc575-rcvd4 -n default:
Name:             hello-node-connect-7d85dfc575-rcvd4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-529076/192.168.49.2
Start Time:       Sun, 02 Nov 2025 12:55:45 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svn2h (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-svn2h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rcvd4 to functional-529076
  Normal   Pulling    6m50s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m50s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m50s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m49s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m37s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-529076 logs hello-node-connect-7d85dfc575-rcvd4 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-529076 logs hello-node-connect-7d85dfc575-rcvd4 -n default: exit status 1 (67.071772ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-rcvd4" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-529076 logs hello-node-connect-7d85dfc575-rcvd4 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-529076 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-rcvd4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-529076/192.168.49.2
Start Time:       Sun, 02 Nov 2025 12:55:45 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svn2h (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-svn2h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rcvd4 to functional-529076
  Normal   Pulling    6m50s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m50s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m50s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m49s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m37s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-529076 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-529076 logs -l app=hello-node-connect: exit status 1 (58.831396ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-rcvd4" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-529076 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-529076 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.151.198
IPs:                      10.97.151.198
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32401/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
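
Note: Endpoints is empty above because the pod never became Ready. The kubelet events give the root cause: the deployment references the image by short name ("kicbase/echo-server"), and the node's short-name resolution is in enforcing mode, so the ambiguous unqualified name is rejected instead of being resolved against a registry. Two possible fixes, sketched with the names from this run (docker.io is an assumption about where kicbase/echo-server is hosted):

	# use a fully qualified image reference instead of a short name
	kubectl --context functional-529076 create deployment hello-node-connect --image docker.io/kicbase/echo-server

	# or relax short-name resolution on the node (see containers-registries.conf(5));
	# assumes a short-name-mode line already exists in /etc/containers/registries.conf
	minikube -p functional-529076 ssh -- sudo sed -i 's/^short-name-mode.*/short-name-mode = "permissive"/' /etc/containers/registries.conf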
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-529076
helpers_test.go:243: (dbg) docker inspect functional-529076:

-- stdout --
	[
	    {
	        "Id": "741ffea3a9cec454d4fbe2c7fd4ea91177606a790aac8bf6f8539d24cac5f053",
	        "Created": "2025-11-02T12:53:49.287696268Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 37034,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T12:53:49.318249526Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/741ffea3a9cec454d4fbe2c7fd4ea91177606a790aac8bf6f8539d24cac5f053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/741ffea3a9cec454d4fbe2c7fd4ea91177606a790aac8bf6f8539d24cac5f053/hostname",
	        "HostsPath": "/var/lib/docker/containers/741ffea3a9cec454d4fbe2c7fd4ea91177606a790aac8bf6f8539d24cac5f053/hosts",
	        "LogPath": "/var/lib/docker/containers/741ffea3a9cec454d4fbe2c7fd4ea91177606a790aac8bf6f8539d24cac5f053/741ffea3a9cec454d4fbe2c7fd4ea91177606a790aac8bf6f8539d24cac5f053-json.log",
	        "Name": "/functional-529076",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-529076:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-529076",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "741ffea3a9cec454d4fbe2c7fd4ea91177606a790aac8bf6f8539d24cac5f053",
	                "LowerDir": "/var/lib/docker/overlay2/235660159ce7a7b7f0ff911c4c00a8bcbfb680a07ecb78bc22f39d1f216e5a82-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/235660159ce7a7b7f0ff911c4c00a8bcbfb680a07ecb78bc22f39d1f216e5a82/merged",
	                "UpperDir": "/var/lib/docker/overlay2/235660159ce7a7b7f0ff911c4c00a8bcbfb680a07ecb78bc22f39d1f216e5a82/diff",
	                "WorkDir": "/var/lib/docker/overlay2/235660159ce7a7b7f0ff911c4c00a8bcbfb680a07ecb78bc22f39d1f216e5a82/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-529076",
	                "Source": "/var/lib/docker/volumes/functional-529076/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-529076",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-529076",
	                "name.minikube.sigs.k8s.io": "functional-529076",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e127c8f819f096fd9a1d89e1da46d5731dbfd0d737767ad595e2179e2c4e3162",
	            "SandboxKey": "/var/run/docker/netns/e127c8f819f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-529076": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:26:3b:a1:81:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "644ade8d5169537b1c747f6448b875fc2dc290eb71c846769e0583c948ec6354",
	                    "EndpointID": "6791a6dc1726edc7d5bf16e5a92c8099d4530e6c483c74a15591993ac2b0a954",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-529076",
	                        "741ffea3a9ce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
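# The inspect dump above gives the full host-port map for the profile
# container (e.g. apiserver 8441/tcp -> 127.0.0.1:32781). A single mapping can
# be re-read with a Go template, assuming the container still exists:
#   docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-529076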
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-529076 -n functional-529076
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-529076 logs -n 25: (1.274660816s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-529076 ssh sudo cat /etc/ssl/certs/12914.pem                                                                    │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ ssh            │ functional-529076 ssh sudo cat /usr/share/ca-certificates/12914.pem                                                        │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ ssh            │ functional-529076 ssh sudo cat /etc/ssl/certs/51391683.0                                                                   │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ ssh            │ functional-529076 ssh sudo cat /etc/ssl/certs/129142.pem                                                                   │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ ssh            │ functional-529076 ssh sudo cat /usr/share/ca-certificates/129142.pem                                                       │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ ssh            │ functional-529076 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ start          │ -p functional-529076 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │                     │
	│ start          │ -p functional-529076 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                            │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-529076 --alsologtostderr -v=1                                                             │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ cp             │ functional-529076 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ ssh            │ functional-529076 ssh -n functional-529076 sudo cat /home/docker/cp-test.txt                                               │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ cp             │ functional-529076 cp functional-529076:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1397805226/001/cp-test.txt │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ ssh            │ functional-529076 ssh -n functional-529076 sudo cat /home/docker/cp-test.txt                                               │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ cp             │ functional-529076 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ ssh            │ functional-529076 ssh -n functional-529076 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ image          │ functional-529076 image ls --format short --alsologtostderr                                                                │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ image          │ functional-529076 image ls --format yaml --alsologtostderr                                                                 │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ ssh            │ functional-529076 ssh pgrep buildkitd                                                                                      │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │                     │
	│ image          │ functional-529076 image build -t localhost/my-image:functional-529076 testdata/build --alsologtostderr                     │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ image          │ functional-529076 image ls --format json --alsologtostderr                                                                 │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ image          │ functional-529076 image ls --format table --alsologtostderr                                                                │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ update-context │ functional-529076 update-context --alsologtostderr -v=2                                                                    │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ update-context │ functional-529076 update-context --alsologtostderr -v=2                                                                    │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ update-context │ functional-529076 update-context --alsologtostderr -v=2                                                                    │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	│ image          │ functional-529076 image ls                                                                                                 │ functional-529076 │ jenkins │ v1.37.0 │ 02 Nov 25 12:56 UTC │ 02 Nov 25 12:56 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
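	# The Audit table is minikube's recorded command history for this host. If
	# this build supports the flag (an assumption), the same view can be dumped
	# on its own with:
	#   out/minikube-linux-amd64 -p functional-529076 logs --audit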
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 12:56:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 12:56:08.651712   51511 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:56:08.652032   51511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:56:08.652048   51511 out.go:374] Setting ErrFile to fd 2...
	I1102 12:56:08.652054   51511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:56:08.652399   51511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:56:08.653040   51511 out.go:368] Setting JSON to false
	I1102 12:56:08.654432   51511 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2321,"bootTime":1762085848,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 12:56:08.654555   51511 start.go:143] virtualization: kvm guest
	I1102 12:56:08.657052   51511 out.go:179] * [functional-529076] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 12:56:08.662473   51511 notify.go:221] Checking for updates...
	I1102 12:56:08.662878   51511 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 12:56:08.664481   51511 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 12:56:08.665798   51511 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 12:56:08.667308   51511 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 12:56:08.669179   51511 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 12:56:08.670833   51511 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 12:56:08.672987   51511 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:56:08.673668   51511 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 12:56:08.706460   51511 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 12:56:08.706588   51511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:56:08.776528   51511 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-02 12:56:08.762834397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:56:08.776686   51511 docker.go:319] overlay module found
	I1102 12:56:08.778846   51511 out.go:179] * Using the docker driver based on existing profile
	I1102 12:56:08.780442   51511 start.go:309] selected driver: docker
	I1102 12:56:08.780459   51511 start.go:930] validating driver "docker" against &{Name:functional-529076 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-529076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 12:56:08.780559   51511 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 12:56:08.780695   51511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:56:08.858286   51511 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-02 12:56:08.846005587 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:56:08.859279   51511 cni.go:84] Creating CNI manager for ""
	I1102 12:56:08.859380   51511 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 12:56:08.859447   51511 start.go:353] cluster config:
	{Name:functional-529076 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-529076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 12:56:08.861115   51511 out.go:179] * dry-run validation complete!
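	# This "Last Start" trace comes from the --dry-run invocation recorded in
	# the Audit table: it validates the driver and cluster config without
	# touching the running node. To reproduce:
	#   out/minikube-linux-amd64 -p functional-529076 start --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio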
	
	
	==> CRI-O <==
	Nov 02 12:56:13 functional-529076 crio[3689]: time="2025-11-02T12:56:13.122308841Z" level=info msg="Started container" PID=7501 containerID=ed4b1807331f440060956dc08aefdfc6f7d3aa6081650695982f195e7f4bde87 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-rdqw8/dashboard-metrics-scraper id=cf57fafe-8270-4146-9d08-932f96e7352a name=/runtime.v1.RuntimeService/StartContainer sandboxID=992c693173b72692191de588c7e28f17191fd51b7b0d211d92c4275366bcef8f
	Nov 02 12:56:16 functional-529076 crio[3689]: time="2025-11-02T12:56:16.080396897Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=7161ec8e-fa64-4750-9f21-6c8af98afb6d name=/runtime.v1.ImageService/PullImage
	Nov 02 12:56:16 functional-529076 crio[3689]: time="2025-11-02T12:56:16.081000943Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3b03aa19-9d15-4453-839a-ce5b8e20c27d name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:56:16 functional-529076 crio[3689]: time="2025-11-02T12:56:16.082622489Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=fe5ad1c0-38d0-4c19-804c-c0eaf6edc701 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 12:56:16 functional-529076 crio[3689]: time="2025-11-02T12:56:16.086336369Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6z26s/kubernetes-dashboard" id=6756aadf-7b5f-4602-b8d8-b3468c3225a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 12:56:16 functional-529076 crio[3689]: time="2025-11-02T12:56:16.086463882Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 12:56:16 functional-529076 crio[3689]: time="2025-11-02T12:56:16.090536655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 12:56:16 functional-529076 crio[3689]: time="2025-11-02T12:56:16.090756924Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/71b261b4d7d257cf20eb80c8b633bdfb02e11e7daffa847ef856aa2a1323f876/merged/etc/group: no such file or directory"
	Nov 02 12:56:16 functional-529076 crio[3689]: time="2025-11-02T12:56:16.09110763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 12:56:16 functional-529076 crio[3689]: time="2025-11-02T12:56:16.121176396Z" level=info msg="Created container bb51f4212c19f85aba7c045816ff45f3e3526559be91c9723beb81d8d913e510: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6z26s/kubernetes-dashboard" id=6756aadf-7b5f-4602-b8d8-b3468c3225a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 12:56:16 functional-529076 crio[3689]: time="2025-11-02T12:56:16.121752493Z" level=info msg="Starting container: bb51f4212c19f85aba7c045816ff45f3e3526559be91c9723beb81d8d913e510" id=8c06b696-ca17-4105-bb23-203117dfc43e name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 12:56:16 functional-529076 crio[3689]: time="2025-11-02T12:56:16.123365849Z" level=info msg="Started container" PID=7578 containerID=bb51f4212c19f85aba7c045816ff45f3e3526559be91c9723beb81d8d913e510 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6z26s/kubernetes-dashboard id=8c06b696-ca17-4105-bb23-203117dfc43e name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ce510c2eeb6d834367043181e5d1298ea217b87e17c183b6a3928a7292bedae
	Nov 02 12:56:28 functional-529076 crio[3689]: time="2025-11-02T12:56:28.018946704Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4a9f98eb-0f3c-4aaa-b6bb-589aab544985 name=/runtime.v1.ImageService/PullImage
	Nov 02 12:56:30 functional-529076 crio[3689]: time="2025-11-02T12:56:30.020946147Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=39d4f318-fbe8-4b68-bbd9-15d2d44ee640 name=/runtime.v1.ImageService/PullImage
	Nov 02 12:56:58 functional-529076 crio[3689]: time="2025-11-02T12:56:58.017486394Z" level=info msg="Stopping pod sandbox: d1f1d53785ebd13b2de615616683d018574975b404d00d3783c21823237735d7" id=8140928f-cad1-4ea2-abb4-5d07f584fb2a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 02 12:56:58 functional-529076 crio[3689]: time="2025-11-02T12:56:58.017555084Z" level=info msg="Stopped pod sandbox (already stopped): d1f1d53785ebd13b2de615616683d018574975b404d00d3783c21823237735d7" id=8140928f-cad1-4ea2-abb4-5d07f584fb2a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 02 12:56:58 functional-529076 crio[3689]: time="2025-11-02T12:56:58.017917787Z" level=info msg="Removing pod sandbox: d1f1d53785ebd13b2de615616683d018574975b404d00d3783c21823237735d7" id=c9c0c824-b122-409c-983c-c9b8d549b153 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 02 12:56:58 functional-529076 crio[3689]: time="2025-11-02T12:56:58.021152745Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 12:56:58 functional-529076 crio[3689]: time="2025-11-02T12:56:58.021206538Z" level=info msg="Removed pod sandbox: d1f1d53785ebd13b2de615616683d018574975b404d00d3783c21823237735d7" id=c9c0c824-b122-409c-983c-c9b8d549b153 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 02 12:57:19 functional-529076 crio[3689]: time="2025-11-02T12:57:19.018869364Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e4abc678-43bf-4156-b63d-47bf64deb32e name=/runtime.v1.ImageService/PullImage
	Nov 02 12:57:22 functional-529076 crio[3689]: time="2025-11-02T12:57:22.019398678Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=237822b0-fb78-45db-93c2-ef3f32057005 name=/runtime.v1.ImageService/PullImage
	Nov 02 12:58:40 functional-529076 crio[3689]: time="2025-11-02T12:58:40.018508117Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e08672b2-293c-461e-99f1-73c0ae2426e5 name=/runtime.v1.ImageService/PullImage
	Nov 02 12:58:56 functional-529076 crio[3689]: time="2025-11-02T12:58:56.01879315Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1912d957-c7b2-4eba-849d-7bfd2de96e92 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:01:21 functional-529076 crio[3689]: time="2025-11-02T13:01:21.019403767Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=32d6aea3-eec4-4121-b290-6f47bc3f8ff6 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:01:38 functional-529076 crio[3689]: time="2025-11-02T13:01:38.019821451Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3a0322de-8bf3-4a84-9e2b-6dd0799655d6 name=/runtime.v1.ImageService/PullImage
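	# Note the repeated "Pulling image: kicbase/echo-server:latest" entries with
	# no matching "Pulled image" line, consistent with the ServiceCmd/DeployApp
	# timeout above. One way to check pull state from inside the node:
	#   out/minikube-linux-amd64 -p functional-529076 ssh -- sudo crictl images
	#   out/minikube-linux-amd64 -p functional-529076 ssh -- sudo crictl pull kicbase/echo-server:latest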
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	bb51f4212c19f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   8ce510c2eeb6d       kubernetes-dashboard-855c9754f9-6z26s        kubernetes-dashboard
	ed4b1807331f4       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   992c693173b72       dashboard-metrics-scraper-77bf4d6c4c-rdqw8   kubernetes-dashboard
	38879a506929d       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   ad95ce1bb7443       mysql-5bb876957f-hd752                       default
	dcbabb0be4a1e       docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58                  9 minutes ago       Running             myfrontend                  0                   843b210e09703       sp-pod                                       default
	dca52e03ee363       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   f9399e0d01939       busybox-mount                                default
	bcd10f37e72b7       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   95ea9c3f064b6       nginx-svc                                    default
	370a2b218170c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   add3bb8a06504       kube-controller-manager-functional-529076    kube-system
	a24a81918fd43       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              2                   76dc69bbd80cf       kube-apiserver-functional-529076             kube-system
	5a16dfa2f7713       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   69ab129cd353c       storage-provisioner                          kube-system
	a244edc38fdf7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Exited              kube-apiserver              1                   76dc69bbd80cf       kube-apiserver-functional-529076             kube-system
	6438a09dba9c9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   add3bb8a06504       kube-controller-manager-functional-529076    kube-system
	272e6698f9577       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   36d6472e05c04       etcd-functional-529076                       kube-system
	a0113b0d1f3c2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   0e124d7a5bb00       kube-scheduler-functional-529076             kube-system
	b7d4e2f52893c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   69ab129cd353c       storage-provisioner                          kube-system
	6f185e0a0d687       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   a787006e02821       kube-proxy-c99s7                             kube-system
	132aab31c64b3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   3c13832a2661e       kindnet-j98hd                                kube-system
	652f1e5c1e5af       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   a9b3a1f531851       coredns-66bc5c9577-mp9ml                     kube-system
	36a1f0a250cd6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   a9b3a1f531851       coredns-66bc5c9577-mp9ml                     kube-system
	1988d93a21fbb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   3c13832a2661e       kindnet-j98hd                                kube-system
	d2fc03517dcdb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   a787006e02821       kube-proxy-c99s7                             kube-system
	823e7defffcf2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   0e124d7a5bb00       kube-scheduler-functional-529076             kube-system
	15dfe36c7fd60       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   36d6472e05c04       etcd-functional-529076                       kube-system
	
	
	==> coredns [36a1f0a250cd6c6dbc8c0b4ee93d6d13d448aa384d43958b21aae6d8d220541f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56553 - 23194 "HINFO IN 3869687529116461582.2248869736340539375. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044818024s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [652f1e5c1e5afbb6228b667171a55fe4b219e9e0385a18f5028cf9dfe27aa294] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33691 - 6867 "HINFO IN 1146240108664703447.7137125338980991491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033525902s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=474": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=470": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=474": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found]
	[ERROR] plugin/kubernetes: Unhandled Error
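	# The "connection refused" and RBAC list errors line up with the
	# kube-apiserver restart (see the Exited apiserver container above) and
	# should clear once it is back up. One way to confirm CoreDNS recovered:
	#   kubectl --context functional-529076 -n kube-system logs -l k8s-app=kube-dns --tail=20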
	
	
	==> describe nodes <==
	Name:               functional-529076
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-529076
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=functional-529076
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T12_54_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 12:54:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-529076
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:05:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:03:18 +0000   Sun, 02 Nov 2025 12:53:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:03:18 +0000   Sun, 02 Nov 2025 12:53:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:03:18 +0000   Sun, 02 Nov 2025 12:53:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:03:18 +0000   Sun, 02 Nov 2025 12:54:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-529076
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e2bb9ce8-3ab9-491e-acac-f9b2a8364daf
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-s6w9s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  default                     hello-node-connect-7d85dfc575-rcvd4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-hd752                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m41s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 coredns-66bc5c9577-mp9ml                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-529076                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-j98hd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-529076              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-529076     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-c99s7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-529076              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-rdqw8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6z26s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-529076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-529076 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-529076 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-529076 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-529076 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-529076 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           11m                node-controller  Node functional-529076 event: Registered Node functional-529076 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-529076 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node functional-529076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node functional-529076 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node functional-529076 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-529076 event: Registered Node functional-529076 in Controller
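	# The node shows ample headroom (1450m of 8 CPUs requested, node Ready), so
	# the failures in this run do not look like scheduling pressure. Pods placed
	# on the node can be cross-checked with:
	#   kubectl --context functional-529076 get pods -A -o wide --field-selector spec.nodeName=functional-529076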
	
	
	==> dmesg <==
	[  +0.083631] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023935] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.640330] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 2 12:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.052730] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023920] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +2.047704] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +4.031606] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +8.511092] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[ +16.382292] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 12:51] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
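	# The "martian source" lines are the kernel flagging loopback-sourced
	# packets arriving on eth0 in the pod network; they fall in the 12:50-12:51
	# window, before this test ran, and look unrelated. To re-check from the node:
	#   out/minikube-linux-amd64 -p functional-529076 ssh -- sudo dmesg | grep -i martian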
	
	
	==> etcd [15dfe36c7fd6098f4efa9641adaf73fceb021f4b430c04d0a92f0af36b5f8451] <==
	{"level":"warn","ts":"2025-11-02T12:54:00.276393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:54:00.282132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:54:00.294101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:54:00.313826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:54:00.320222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:54:00.326071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:54:00.383318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55624","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-02T12:54:55.884968Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-02T12:54:55.885387Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-529076","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-02T12:54:55.885544Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-02T12:54:55.889415Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-02T12:54:55.889468Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T12:54:55.889489Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-02T12:54:55.889557Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-02T12:54:55.889594Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-02T12:54:55.889616Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-02T12:54:55.889691Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-02T12:54:55.889717Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-02T12:54:55.889603Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-02T12:54:55.889737Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-02T12:54:55.889747Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T12:54:55.891421Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-02T12:54:55.891471Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T12:54:55.891503Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-02T12:54:55.891512Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-529076","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [272e6698f9577299a426af363b971c026080529f9d61b8801ad64a624eefcefa] <==
	{"level":"warn","ts":"2025-11-02T12:55:17.667368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.673637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.683735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.688523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.694452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.700897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.707512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.713363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.719872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.725606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.731982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.737956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.749374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.755527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.762136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.773803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.776954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.782682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.788486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:55:17.834710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T12:56:13.411554Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.800325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-rdqw8\" limit:1 ","response":"range_response_count:1 size:4398"}
	{"level":"info","ts":"2025-11-02T12:56:13.411671Z","caller":"traceutil/trace.go:172","msg":"trace[943287400] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-rdqw8; range_end:; response_count:1; response_revision:844; }","duration":"131.932097ms","start":"2025-11-02T12:56:13.279724Z","end":"2025-11-02T12:56:13.411656Z","steps":["trace[943287400] 'range keys from in-memory index tree'  (duration: 131.664383ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:05:17.403812Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1157}
	{"level":"info","ts":"2025-11-02T13:05:17.423108Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1157,"took":"18.939768ms","hash":2308684312,"current-db-size-bytes":3469312,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1638400,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-02T13:05:17.423151Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2308684312,"revision":1157,"compact-revision":-1}
	
	
	==> kernel <==
	 13:05:47 up 48 min,  0 user,  load average: 0.06, 0.29, 0.47
	Linux functional-529076 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [132aab31c64b3de10b99412a0b47ef11872fe0f177f57759c45d68cce788d1da] <==
	I1102 13:03:46.445691       1 main.go:301] handling current node
	I1102 13:03:56.442495       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:03:56.442900       1 main.go:301] handling current node
	I1102 13:04:06.448766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:04:06.448805       1 main.go:301] handling current node
	I1102 13:04:16.449861       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:04:16.449896       1 main.go:301] handling current node
	I1102 13:04:26.443397       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:04:26.443454       1 main.go:301] handling current node
	I1102 13:04:36.449612       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:04:36.449642       1 main.go:301] handling current node
	I1102 13:04:46.442659       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:04:46.442692       1 main.go:301] handling current node
	I1102 13:04:56.445439       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:04:56.445479       1 main.go:301] handling current node
	I1102 13:05:06.448786       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:05:06.448818       1 main.go:301] handling current node
	I1102 13:05:16.449809       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:05:16.449849       1 main.go:301] handling current node
	I1102 13:05:26.443388       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:05:26.443434       1 main.go:301] handling current node
	I1102 13:05:36.449620       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:05:36.449651       1 main.go:301] handling current node
	I1102 13:05:46.441710       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:05:46.441768       1 main.go:301] handling current node
	
	
	==> kindnet [1988d93a21fbb01280505b868db4f9d58114c176e6846c4d5a44534485f16750] <==
	I1102 12:54:09.436187       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 12:54:09.436441       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1102 12:54:09.436615       1 main.go:148] setting mtu 1500 for CNI 
	I1102 12:54:09.436634       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 12:54:09.436660       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T12:54:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 12:54:09.637722       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 12:54:09.637773       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 12:54:09.637794       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 12:54:09.730708       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 12:54:10.030717       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 12:54:10.030744       1 metrics.go:72] Registering metrics
	I1102 12:54:10.030794       1 controller.go:711] "Syncing nftables rules"
	I1102 12:54:19.638723       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:54:19.638803       1 main.go:301] handling current node
	I1102 12:54:29.638316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:54:29.638364       1 main.go:301] handling current node
	I1102 12:54:39.638815       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 12:54:39.638864       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a244edc38fdf797bb4c38c1ee4b899aa822c0d44d69bf5072e6d0543f9dd4ad5] <==
	I1102 12:54:59.172094       1 options.go:263] external host was not specified, using 192.168.49.2
	I1102 12:54:59.175367       1 server.go:150] Version: v1.34.1
	I1102 12:54:59.175402       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1102 12:54:59.175709       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
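	This instance exited immediately because 0.0.0.0:8441 was still bound, most likely by the previous apiserver that had not finished shutting down; the next block shows the replacement that did bind. When reproducing locally, a standard iproute2 query run inside the node (via minikube ssh; not part of this report's output) identifies the holder:
	
	  sudo ss -ltnp 'sport = :8441'   # lists the listening socket on 8441 with its owning PID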
	
	
	==> kube-apiserver [a24a81918fd430e73560f45b7467d6640671b68300ca3c54048c0bebf33ceb7b] <==
	I1102 12:55:18.288444       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 12:55:18.301973       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 12:55:18.524484       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 12:55:19.186379       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1102 12:55:19.392878       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1102 12:55:19.393970       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 12:55:19.397742       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 12:55:21.811508       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 12:55:23.537434       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 12:55:23.587202       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 12:55:40.264697       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.65.39"}
	I1102 12:55:45.770320       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.18.29"}
	I1102 12:55:45.802749       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.151.198"}
	I1102 12:55:49.766923       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.4.100"}
	E1102 12:55:59.599082       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60460: use of closed network connection
	E1102 12:56:06.454373       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53722: use of closed network connection
	I1102 12:56:06.657967       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.245.79"}
	I1102 12:56:09.897171       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 12:56:09.947578       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 12:56:10.016483       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.177.46"}
	I1102 12:56:10.028620       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.221.186"}
	E1102 12:56:18.789442       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:59868: use of closed network connection
	E1102 12:56:19.706547       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:59888: use of closed network connection
	E1102 12:56:21.476366       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:59910: use of closed network connection
	I1102 13:05:18.211361       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [370a2b218170c3f404ae7ec02b0a2fe263caaad8a8505070e267676d3bd7bdd6] <==
	I1102 12:55:23.182409       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 12:55:23.182428       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 12:55:23.182522       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 12:55:23.182607       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 12:55:23.182613       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 12:55:23.182618       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 12:55:23.182690       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 12:55:23.182696       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 12:55:23.182797       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 12:55:23.182803       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 12:55:23.182811       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-529076"
	I1102 12:55:23.182820       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 12:55:23.182887       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 12:55:23.182943       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1102 12:55:23.183928       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 12:55:23.185858       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 12:55:23.187020       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 12:55:23.188226       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 12:55:23.207426       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1102 12:56:09.948934       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1102 12:56:09.953602       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1102 12:56:09.956583       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1102 12:56:09.958068       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1102 12:56:09.961142       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1102 12:56:09.964998       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [6438a09dba9c98fbb1b1c0e09968f586ba2a6e67043035bdb076950cc71eacf1] <==
	I1102 12:54:59.561239       1 serving.go:386] Generated self-signed cert in-memory
	I1102 12:55:00.154331       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1102 12:55:00.154354       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 12:55:00.155535       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1102 12:55:00.155589       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1102 12:55:00.155822       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1102 12:55:00.155843       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1102 12:55:10.157946       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [6f185e0a0d687bd77243f4cdbbf00ea633a82f06b13c051f173e5d4b80b47342] <==
	E1102 12:54:46.176105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-529076&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 12:54:47.462663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-529076&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 12:54:49.937685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-529076&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 12:54:53.306465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-529076&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 12:55:01.758470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-529076&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1102 12:55:18.975632       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 12:55:18.975666       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1102 12:55:18.975812       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 12:55:18.995377       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 12:55:18.995441       1 server_linux.go:132] "Using iptables Proxier"
	I1102 12:55:19.000953       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 12:55:19.001317       1 server.go:527] "Version info" version="v1.34.1"
	I1102 12:55:19.001346       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 12:55:19.002689       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 12:55:19.002709       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 12:55:19.002735       1 config.go:200] "Starting service config controller"
	I1102 12:55:19.002742       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 12:55:19.002754       1 config.go:309] "Starting node config controller"
	I1102 12:55:19.002759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 12:55:19.002767       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 12:55:19.002770       1 config.go:106] "Starting endpoint slice config controller"
	I1102 12:55:19.002789       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 12:55:19.103648       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 12:55:19.103666       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 12:55:19.103682       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d2fc03517dcdb438d4d73be35f9b848a9dc7f7143288f19cf1d7b5a19ce14d1e] <==
	I1102 12:54:09.294592       1 server_linux.go:53] "Using iptables proxy"
	I1102 12:54:09.358629       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 12:54:09.459053       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 12:54:09.459091       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1102 12:54:09.459201       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 12:54:09.481283       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 12:54:09.481338       1 server_linux.go:132] "Using iptables Proxier"
	I1102 12:54:09.487503       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 12:54:09.487960       1 server.go:527] "Version info" version="v1.34.1"
	I1102 12:54:09.487984       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 12:54:09.489272       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 12:54:09.489299       1 config.go:106] "Starting endpoint slice config controller"
	I1102 12:54:09.489313       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 12:54:09.489277       1 config.go:200] "Starting service config controller"
	I1102 12:54:09.489331       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 12:54:09.489299       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 12:54:09.489454       1 config.go:309] "Starting node config controller"
	I1102 12:54:09.489462       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 12:54:09.489474       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 12:54:09.589873       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 12:54:09.589926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 12:54:09.589932       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [823e7defffcf210b3f080089359c9d789c33106d0a27cf37233d7aa080bfd814] <==
	E1102 12:54:00.785998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 12:54:00.786021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 12:54:00.786018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 12:54:00.786213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 12:54:00.786318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 12:54:00.786352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 12:54:00.786353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 12:54:01.718677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 12:54:01.768167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 12:54:01.785648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 12:54:01.799824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 12:54:01.865356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 12:54:01.867748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 12:54:01.918071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 12:54:01.929167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 12:54:01.967235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 12:54:01.983389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 12:54:01.987323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1102 12:54:02.284213       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 12:54:55.776415       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1102 12:54:55.776428       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 12:54:55.776493       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1102 12:54:55.776514       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1102 12:54:55.776554       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1102 12:54:55.776596       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a0113b0d1f3c2cd02521f4f03df2fef4ebf50c8dc0345e5100634be51bfa0fff] <==
	I1102 12:54:56.610516       1 serving.go:386] Generated self-signed cert in-memory
	W1102 12:54:57.926994       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 12:54:57.927030       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 12:54:57.927053       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 12:54:57.927064       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 12:54:57.964653       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 12:54:57.964676       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 12:54:57.966817       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 12:54:57.966854       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 12:54:57.967206       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 12:54:57.967237       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 12:54:58.067613       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1102 12:55:18.209295       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 12:55:18.209430       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 12:55:18.211073       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	
	
	==> kubelet <==
	Nov 02 13:03:08 functional-529076 kubelet[4275]: E1102 13:03:08.019025    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:03:18 functional-529076 kubelet[4275]: E1102 13:03:18.019085    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:03:23 functional-529076 kubelet[4275]: E1102 13:03:23.018674    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:03:29 functional-529076 kubelet[4275]: E1102 13:03:29.018827    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:03:36 functional-529076 kubelet[4275]: E1102 13:03:36.018756    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:03:44 functional-529076 kubelet[4275]: E1102 13:03:44.018750    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:03:48 functional-529076 kubelet[4275]: E1102 13:03:48.018656    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:03:58 functional-529076 kubelet[4275]: E1102 13:03:58.018724    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:04:00 functional-529076 kubelet[4275]: E1102 13:04:00.018975    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:04:09 functional-529076 kubelet[4275]: E1102 13:04:09.018392    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:04:15 functional-529076 kubelet[4275]: E1102 13:04:15.018941    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:04:24 functional-529076 kubelet[4275]: E1102 13:04:24.018961    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:04:30 functional-529076 kubelet[4275]: E1102 13:04:30.018888    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:04:35 functional-529076 kubelet[4275]: E1102 13:04:35.018807    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:04:42 functional-529076 kubelet[4275]: E1102 13:04:42.018712    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:04:48 functional-529076 kubelet[4275]: E1102 13:04:48.019335    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:04:54 functional-529076 kubelet[4275]: E1102 13:04:54.018277    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:05:01 functional-529076 kubelet[4275]: E1102 13:05:01.018238    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:05:06 functional-529076 kubelet[4275]: E1102 13:05:06.018980    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:05:12 functional-529076 kubelet[4275]: E1102 13:05:12.018318    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:05:19 functional-529076 kubelet[4275]: E1102 13:05:19.018957    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:05:25 functional-529076 kubelet[4275]: E1102 13:05:25.018179    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:05:31 functional-529076 kubelet[4275]: E1102 13:05:31.018620    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
	Nov 02 13:05:40 functional-529076 kubelet[4275]: E1102 13:05:40.018387    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-s6w9s" podUID="11f9df38-4895-44bc-b848-ad4c041d07d6"
	Nov 02 13:05:43 functional-529076 kubelet[4275]: E1102 13:05:43.018311    4275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rcvd4" podUID="e20a2a18-941a-4e11-9ac8-235c9c06cc3c"
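	Every kubelet error in this block is the same failure: "kicbase/echo-server" is an unqualified short name, and with CRI-O's short-name-mode = "enforcing" the runtime refuses to pick a registry when the configured search list makes the name ambiguous. Two conventional remedies, sketched here as assumptions rather than anything this job applied:
	
	  # fully qualify the image in the workload (container name "echo-server" is taken from the describe output below)
	  kubectl --context functional-529076 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest
	
	  # or, on the node, narrow /etc/containers/registries.conf so the short name resolves unambiguously:
	  #   unqualified-search-registries = ["docker.io"]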
	
	
	==> kubernetes-dashboard [bb51f4212c19f85aba7c045816ff45f3e3526559be91c9723beb81d8d913e510] <==
	2025/11/02 12:56:16 Starting overwatch
	2025/11/02 12:56:16 Using namespace: kubernetes-dashboard
	2025/11/02 12:56:16 Using in-cluster config to connect to apiserver
	2025/11/02 12:56:16 Using secret token for csrf signing
	2025/11/02 12:56:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 12:56:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 12:56:16 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 12:56:16 Generating JWE encryption key
	2025/11/02 12:56:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 12:56:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 12:56:16 Initializing JWE encryption key from synchronized object
	2025/11/02 12:56:16 Creating in-cluster Sidecar client
	2025/11/02 12:56:16 Successful request to sidecar
	2025/11/02 12:56:16 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [5a16dfa2f77139c733607114b2d4141ca82a3040baec849837fd49036af72da3] <==
	W1102 13:05:22.600974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:24.603599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:24.609140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:26.612173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:26.615795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:28.618534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:28.622200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:30.625064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:30.628629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:32.632116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:32.636691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:34.639391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:34.643157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:36.645661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:36.649834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:38.652502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:38.657435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:40.660638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:40.664361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:42.667965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:42.673114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:44.676863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:44.680551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:46.683594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:05:46.689838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
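	The provisioner apparently still drives its leader election through v1 Endpoints, hence this steady two-per-cycle warning stream on a v1.34 cluster; the replacement API is EndpointSlice. The equivalent read against the new API, for reference:
	
	  kubectl --context functional-529076 get endpointslices -n kube-system   # same backing data as the deprecated Endpoints objects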
	
	
	==> storage-provisioner [b7d4e2f52893cfe6d21ab82fb8d6c8371488d7420d5acb039574f778107d1dc7] <==
	I1102 12:54:46.085280       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 12:54:46.086685       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-529076 -n functional-529076
helpers_test.go:269: (dbg) Run:  kubectl --context functional-529076 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-s6w9s hello-node-connect-7d85dfc575-rcvd4
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-529076 describe pod busybox-mount hello-node-75c85bcc94-s6w9s hello-node-connect-7d85dfc575-rcvd4
helpers_test.go:290: (dbg) kubectl --context functional-529076 describe pod busybox-mount hello-node-75c85bcc94-s6w9s hello-node-connect-7d85dfc575-rcvd4:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-529076/192.168.49.2
	Start Time:       Sun, 02 Nov 2025 12:55:56 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://dca52e03ee363f36db2a3654df0f24cc7f5736cd244d0083ec55a408176f88e0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 02 Nov 2025 12:55:58 +0000
	      Finished:     Sun, 02 Nov 2025 12:55:58 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-28nq4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-28nq4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-529076
	  Normal  Pulling    9m51s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m50s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.379s (1.379s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m50s  kubelet            Created container: mount-munger
	  Normal  Started    9m50s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-s6w9s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-529076/192.168.49.2
	Start Time:       Sun, 02 Nov 2025 12:55:49 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t6grp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t6grp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m58s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-s6w9s to functional-529076
	  Normal   Pulling    7m8s (x5 over 9m58s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m8s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m8s (x5 over 9m58s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m54s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-rcvd4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-529076/192.168.49.2
	Start Time:       Sun, 02 Nov 2025 12:55:45 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svn2h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-svn2h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rcvd4 to functional-529076
	  Normal   Pulling    6m52s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m52s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m52s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m51s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.88s)
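
Every ErrImagePull in the post-mortem above has the same root cause: the pod spec references the unqualified image name "kicbase/echo-server", and CRI-O resolves short names via containers-registries.conf with short-name mode set to "enforcing", so a name that matches more than one unqualified-search registry is rejected as ambiguous instead of being pulled. One possible workaround is a short-name alias dropped into the node; this is a minimal sketch, and the drop-in path and file name are illustrative rather than taken from the log:

	out/minikube-linux-amd64 -p functional-529076 ssh -- \
	  "printf '[aliases]\n\"kicbase/echo-server\" = \"docker.io/kicbase/echo-server\"\n' \
	  | sudo tee /etc/containers/registries.conf.d/99-echo-server.conf"

After CRI-O rereads its registries configuration (a SIGHUP to the crio process is usually enough), the short name should resolve unambiguously to docker.io.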

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image load --daemon kicbase/echo-server:functional-529076 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-529076" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)
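
minikube's "image load --daemon" takes the tagged image from the host Docker daemon and imports it into the node's CRI-O store, so an empty follow-up "image ls" means the import half failed without surfacing an error. Listing the store directly narrows that down; a diagnostic sketch (crictl inside the kicbase node is assumed, consistent with the crictl calls elsewhere in these logs):

	out/minikube-linux-amd64 -p functional-529076 image load --daemon \
	  kicbase/echo-server:functional-529076 --alsologtostderr
	out/minikube-linux-amd64 -p functional-529076 ssh -- sudo crictl images | grep echo-server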

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image load --daemon kicbase/echo-server:functional-529076 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-529076" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-529076
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image load --daemon kicbase/echo-server:functional-529076 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-529076" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image save kicbase/echo-server:functional-529076 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)
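
The command exited zero yet left no tar on disk, so the failure is in the export path rather than in the test's expectation. A minimal reproduction sketch (the target path here is arbitrary):

	out/minikube-linux-amd64 -p functional-529076 image save \
	  kicbase/echo-server:functional-529076 /tmp/echo-server-save.tar --alsologtostderr
	ls -l /tmp/echo-server-save.tar    # a zero-byte or missing file confirms the broken save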

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1102 12:55:49.160719   47292 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:55:49.160892   47292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:55:49.160903   47292 out.go:374] Setting ErrFile to fd 2...
	I1102 12:55:49.160909   47292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:55:49.161138   47292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:55:49.161732   47292 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:55:49.161849   47292 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:55:49.162258   47292 cli_runner.go:164] Run: docker container inspect functional-529076 --format={{.State.Status}}
	I1102 12:55:49.180158   47292 ssh_runner.go:195] Run: systemctl --version
	I1102 12:55:49.180209   47292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529076
	I1102 12:55:49.198878   47292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/functional-529076/id_rsa Username:docker}
	I1102 12:55:49.297464   47292 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1102 12:55:49.297525   47292 cache_images.go:255] Failed to load cached images for "functional-529076": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1102 12:55:49.297540   47292 cache_images.go:267] failed pushing to: functional-529076

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
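
This is a direct cascade from ImageSaveToFile: the stderr above shows the load step stat-ing the tarball that the earlier save never wrote. When reproducing by hand, a small guard makes the dependency explicit (paths as in the log):

	TAR=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	[ -s "$TAR" ] || { echo "tarball missing or empty: rerun 'image save' first" >&2; exit 1; }
	out/minikube-linux-amd64 -p functional-529076 image load "$TAR" --alsologtostderr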

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-529076
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image save --daemon kicbase/echo-server:functional-529076 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-529076
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-529076: exit status 1 (17.01466ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-529076

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-529076

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)
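
The test expects the round-tripped image to reappear in the host Docker daemon under the localhost/ prefix, which is exactly what it inspects. When the inspect fails, listing every echo-server tag Docker knows about distinguishes "saved under a different name" from "not saved at all"; a diagnostic sketch, not part of the test itself:

	docker image ls --format '{{.Repository}}:{{.Tag}}' | grep echo-server \
	  || echo "image absent under any name"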

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-529076 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-529076 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-s6w9s" [11f9df38-4895-44bc-b848-ad4c041d07d6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-529076 -n functional-529076
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-02 13:05:50.088619949 +0000 UTC m=+1131.908672014
functional_test.go:1460: (dbg) Run:  kubectl --context functional-529076 describe po hello-node-75c85bcc94-s6w9s -n default
functional_test.go:1460: (dbg) kubectl --context functional-529076 describe po hello-node-75c85bcc94-s6w9s -n default:
Name:             hello-node-75c85bcc94-s6w9s
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-529076/192.168.49.2
Start Time:       Sun, 02 Nov 2025 12:55:49 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t6grp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-t6grp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-s6w9s to functional-529076
  Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m41s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-529076 logs hello-node-75c85bcc94-s6w9s -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-529076 logs hello-node-75c85bcc94-s6w9s -n default: exit status 1 (59.728773ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-s6w9s" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-529076 logs hello-node-75c85bcc94-s6w9s -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.57s)
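
The deployment was created with the unqualified "kicbase/echo-server", so every replica hits the same short-name-enforcement pull error documented above and the ten-minute wait can never succeed. A sketch of the fully qualified variant, which sidesteps short-name resolution entirely:

	kubectl --context functional-529076 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-529076 expose deployment hello-node \
	  --type=NodePort --port=8080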

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 service --namespace=default --https --url hello-node: exit status 115 (530.940579ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30474
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-529076 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 service hello-node --url --format={{.IP}}: exit status 115 (536.683456ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-529076 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 service hello-node --url: exit status 115 (535.843929ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30474
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-529076 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30474
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
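
HTTPS, Format, and URL all fail with SVC_UNREACHABLE for the same reason: the NodePort itself is fine (the URLs printed above are well-formed), but the service has no ready backend because the hello-node pod never pulled its image. Checking endpoints first separates a service-wiring problem from a pod problem:

	kubectl --context functional-529076 get endpoints hello-node
	# "<none>" in the ENDPOINTS column means no ready pod backs the service;
	# fix the image pull before retesting the URL.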

                                                
                                    
TestJSONOutput/pause/Command (2.06s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-441451 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-441451 --output=json --user=testUser: exit status 80 (2.061345343s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"94f90bf7-9f1b-4db7-9e0f-71d019779c46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-441451 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"31bbf302-9644-4bdd-9a39-be69952c6e52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-02T13:16:07Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"143ee574-08f0-44d4-bb8d-69947cbee948","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-441451 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.06s)

                                                
                                    
TestJSONOutput/unpause/Command (1.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-441451 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-441451 --output=json --user=testUser: exit status 80 (1.587662563s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"17aba1f1-7a2c-4041-88fa-ff00aac2df61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-441451 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"df9dea3a-cced-4ece-9728-15df9f7ecab6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-02T13:16:09Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"acb4880b-11f7-462d-8756-803fa94888ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-441451 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.59s)
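
Both pause and unpause die on the same probe: minikube shells into the node and runs "sudo runc list -f json", and /run/runc, runc's default state root, does not exist there. That pattern suggests CRI-O on this image is driving containers through a different OCI runtime (or a non-default runc root), so the runc-based listing fails even while containers run normally. A hedged first check inside the node (whether this crio build supports a bare "crio config" dump is an assumption):

	out/minikube-linux-amd64 -p json-output-441451 ssh -- \
	  "ls -d /run/runc /run/crun 2>/dev/null; sudo crio config 2>/dev/null | grep -E 'default_runtime|runtime_path' | head"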

                                                
                                    
TestPause/serial/Pause (6.15s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-058363 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-058363 --alsologtostderr -v=5: exit status 80 (2.437468378s)

                                                
                                                
-- stdout --
	* Pausing node pause-058363 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1102 13:29:36.533899  199203 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:29:36.534838  199203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:29:36.534853  199203 out.go:374] Setting ErrFile to fd 2...
	I1102 13:29:36.534859  199203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:29:36.535277  199203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:29:36.535754  199203 out.go:368] Setting JSON to false
	I1102 13:29:36.535891  199203 mustload.go:66] Loading cluster: pause-058363
	I1102 13:29:36.536468  199203 config.go:182] Loaded profile config "pause-058363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:36.537045  199203 cli_runner.go:164] Run: docker container inspect pause-058363 --format={{.State.Status}}
	I1102 13:29:36.567816  199203 host.go:66] Checking if "pause-058363" exists ...
	I1102 13:29:36.568176  199203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:29:36.635035  199203 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-02 13:29:36.623248626 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:29:36.635926  199203 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-058363 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 13:29:36.639200  199203 out.go:179] * Pausing node pause-058363 ... 
	I1102 13:29:36.640313  199203 host.go:66] Checking if "pause-058363" exists ...
	I1102 13:29:36.640559  199203 ssh_runner.go:195] Run: systemctl --version
	I1102 13:29:36.640623  199203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-058363
	I1102 13:29:36.659355  199203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/pause-058363/id_rsa Username:docker}
	I1102 13:29:36.766176  199203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:29:36.780772  199203 pause.go:52] kubelet running: true
	I1102 13:29:36.780829  199203 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:29:36.938901  199203 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:29:36.938996  199203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:29:37.017869  199203 cri.go:89] found id: "ea36de4ed2b38b9e90147552118fa25e5041e8fd86f0ee201615a3516abe2b58"
	I1102 13:29:37.017887  199203 cri.go:89] found id: "7722241f8df17104a84d22acfefc49d3d8945a84158e17d4c4b789d78ecf2fe4"
	I1102 13:29:37.017891  199203 cri.go:89] found id: "5ef665df9e1b2415548effe9ef112693ed07ea63c9127c6431a2d46d57c7cbde"
	I1102 13:29:37.017894  199203 cri.go:89] found id: "1ea833aea99fd48eed22958ef01c3c5fb1ce73ef434404235f5760cee4d2f625"
	I1102 13:29:37.017898  199203 cri.go:89] found id: "1772e4fcefc4ec45afdb78b4ef07f90b3376d0d44849d8ee3df16cdb2836b477"
	I1102 13:29:37.017901  199203 cri.go:89] found id: "0b915dc404e29c4936639ff0ea3fd53c95e4220a98686b075b5dcc6cbf2803be"
	I1102 13:29:37.017903  199203 cri.go:89] found id: "b01afa1c8739478af11dad3bec848b987c29f9e36ec449e9169e8bb3af1b842b"
	I1102 13:29:37.017905  199203 cri.go:89] found id: ""
	I1102 13:29:37.017941  199203 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:29:37.030135  199203 retry.go:31] will retry after 318.253245ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:29:37Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:29:37.348663  199203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:29:37.361178  199203 pause.go:52] kubelet running: false
	I1102 13:29:37.361225  199203 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:29:37.473531  199203 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:29:37.473618  199203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:29:37.539037  199203 cri.go:89] found id: "ea36de4ed2b38b9e90147552118fa25e5041e8fd86f0ee201615a3516abe2b58"
	I1102 13:29:37.539065  199203 cri.go:89] found id: "7722241f8df17104a84d22acfefc49d3d8945a84158e17d4c4b789d78ecf2fe4"
	I1102 13:29:37.539071  199203 cri.go:89] found id: "5ef665df9e1b2415548effe9ef112693ed07ea63c9127c6431a2d46d57c7cbde"
	I1102 13:29:37.539075  199203 cri.go:89] found id: "1ea833aea99fd48eed22958ef01c3c5fb1ce73ef434404235f5760cee4d2f625"
	I1102 13:29:37.539079  199203 cri.go:89] found id: "1772e4fcefc4ec45afdb78b4ef07f90b3376d0d44849d8ee3df16cdb2836b477"
	I1102 13:29:37.539082  199203 cri.go:89] found id: "0b915dc404e29c4936639ff0ea3fd53c95e4220a98686b075b5dcc6cbf2803be"
	I1102 13:29:37.539087  199203 cri.go:89] found id: "b01afa1c8739478af11dad3bec848b987c29f9e36ec449e9169e8bb3af1b842b"
	I1102 13:29:37.539090  199203 cri.go:89] found id: ""
	I1102 13:29:37.539148  199203 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:29:37.551396  199203 retry.go:31] will retry after 221.697108ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:29:37Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:29:37.773919  199203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:29:37.786620  199203 pause.go:52] kubelet running: false
	I1102 13:29:37.786704  199203 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:29:37.902102  199203 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:29:37.902187  199203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:29:37.967649  199203 cri.go:89] found id: "ea36de4ed2b38b9e90147552118fa25e5041e8fd86f0ee201615a3516abe2b58"
	I1102 13:29:37.967671  199203 cri.go:89] found id: "7722241f8df17104a84d22acfefc49d3d8945a84158e17d4c4b789d78ecf2fe4"
	I1102 13:29:37.967676  199203 cri.go:89] found id: "5ef665df9e1b2415548effe9ef112693ed07ea63c9127c6431a2d46d57c7cbde"
	I1102 13:29:37.967681  199203 cri.go:89] found id: "1ea833aea99fd48eed22958ef01c3c5fb1ce73ef434404235f5760cee4d2f625"
	I1102 13:29:37.967685  199203 cri.go:89] found id: "1772e4fcefc4ec45afdb78b4ef07f90b3376d0d44849d8ee3df16cdb2836b477"
	I1102 13:29:37.967688  199203 cri.go:89] found id: "0b915dc404e29c4936639ff0ea3fd53c95e4220a98686b075b5dcc6cbf2803be"
	I1102 13:29:37.967691  199203 cri.go:89] found id: "b01afa1c8739478af11dad3bec848b987c29f9e36ec449e9169e8bb3af1b842b"
	I1102 13:29:37.967694  199203 cri.go:89] found id: ""
	I1102 13:29:37.967741  199203 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:29:37.979548  199203 retry.go:31] will retry after 448.935699ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:29:37Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:29:38.429255  199203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:29:38.442635  199203 pause.go:52] kubelet running: false
	I1102 13:29:38.442696  199203 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:29:38.564828  199203 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:29:38.564898  199203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:29:38.632590  199203 cri.go:89] found id: "ea36de4ed2b38b9e90147552118fa25e5041e8fd86f0ee201615a3516abe2b58"
	I1102 13:29:38.632612  199203 cri.go:89] found id: "7722241f8df17104a84d22acfefc49d3d8945a84158e17d4c4b789d78ecf2fe4"
	I1102 13:29:38.632616  199203 cri.go:89] found id: "5ef665df9e1b2415548effe9ef112693ed07ea63c9127c6431a2d46d57c7cbde"
	I1102 13:29:38.632619  199203 cri.go:89] found id: "1ea833aea99fd48eed22958ef01c3c5fb1ce73ef434404235f5760cee4d2f625"
	I1102 13:29:38.632622  199203 cri.go:89] found id: "1772e4fcefc4ec45afdb78b4ef07f90b3376d0d44849d8ee3df16cdb2836b477"
	I1102 13:29:38.632624  199203 cri.go:89] found id: "0b915dc404e29c4936639ff0ea3fd53c95e4220a98686b075b5dcc6cbf2803be"
	I1102 13:29:38.632627  199203 cri.go:89] found id: "b01afa1c8739478af11dad3bec848b987c29f9e36ec449e9169e8bb3af1b842b"
	I1102 13:29:38.632629  199203 cri.go:89] found id: ""
	I1102 13:29:38.632665  199203 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:29:38.884236  199203 out.go:203] 
	W1102 13:29:38.891442  199203 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:29:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:29:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:29:38.891484  199203 out.go:285] * 
	* 
	W1102 13:29:38.895347  199203 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:29:38.900602  199203 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-058363 --alsologtostderr -v=5" : exit status 80
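
The verbose trace localizes the fault: on every retry crictl enumerates the same seven running containers while "sudo runc list -f json" fails with "open /run/runc: no such file or directory", so the workload is healthy and only the runc-based pause probe is broken. The disagreement can be reproduced with the two calls the trace itself makes:

	out/minikube-linux-amd64 -p pause-058363 ssh -- sudo crictl ps --quiet   # succeeds, prints container IDs
	out/minikube-linux-amd64 -p pause-058363 ssh -- sudo runc list -f json   # fails: /run/runc is absent
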
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-058363
helpers_test.go:243: (dbg) docker inspect pause-058363:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853",
	        "Created": "2025-11-02T13:28:52.746063497Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185826,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:28:52.799365653Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853/hostname",
	        "HostsPath": "/var/lib/docker/containers/dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853/hosts",
	        "LogPath": "/var/lib/docker/containers/dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853/dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853-json.log",
	        "Name": "/pause-058363",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-058363:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-058363",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853",
	                "LowerDir": "/var/lib/docker/overlay2/32ab61b585bf012734c875a094781ff9579835c319815042251082309a468b3d-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/32ab61b585bf012734c875a094781ff9579835c319815042251082309a468b3d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/32ab61b585bf012734c875a094781ff9579835c319815042251082309a468b3d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/32ab61b585bf012734c875a094781ff9579835c319815042251082309a468b3d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-058363",
	                "Source": "/var/lib/docker/volumes/pause-058363/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-058363",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-058363",
	                "name.minikube.sigs.k8s.io": "pause-058363",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9d8305ca2dd22aa006dfa6189cebce882678f3cb0cd9f45eda73bab5a0af2422",
	            "SandboxKey": "/var/run/docker/netns/9d8305ca2dd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32990"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32991"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32994"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32992"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32993"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-058363": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:1f:9b:41:d5:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d4fa9956052dd56822e5a32d1ec3f05944e3caba0d017a50ac35c82a56b0508",
	                    "EndpointID": "0f4add9b39a23025a756a66f3d112e731db88166d91671640d7666f6aebb93a1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-058363",
	                        "dba5735002b3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
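(The JSON above is the docker inspect dump of the pause-058363 node container that the harness captures for its post-mortem. A minimal sketch of reproducing it by hand, assuming the container still exists; the template form pulls out a single field, such as the node IP shown under Networks:

	docker container inspect pause-058363
	docker container inspect -f '{{ (index .NetworkSettings.Networks "pause-058363").IPAddress }}' pause-058363
)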
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-058363 -n pause-058363
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-058363 -n pause-058363: exit status 2 (341.478433ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
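(The exit status 2 with Host reporting "Running" is expected here rather than a separate failure: the container is up, but pausing the cluster stops the kubelet and control-plane processes, so minikube status exits non-zero. A quick way to see which component trips it, assuming the profile still exists and that the status template exposes the Kubelet and APIServer fields alongside Host:

	out/minikube-linux-amd64 status -p pause-058363 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'
)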
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-058363 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-058363 logs -n 25: (1.07074049s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-744353 --schedule 5m                                                                                      │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --schedule 5m                                                                                      │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                     │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                     │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                     │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --cancel-scheduled                                                                                 │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │ 02 Nov 25 13:27 UTC │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                     │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                     │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                     │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │ 02 Nov 25 13:27 UTC │
	│ delete  │ -p scheduled-stop-744353                                                                                                    │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:28 UTC │
	│ start   │ -p insufficient-storage-449768 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-449768 │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │                     │
	│ delete  │ -p insufficient-storage-449768                                                                                              │ insufficient-storage-449768 │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:28 UTC │
	│ start   │ -p cert-expiration-110310 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-110310      │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p offline-crio-063012 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-063012         │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p pause-058363 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-058363                │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p force-systemd-env-091295 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-091295    │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:29 UTC │
	│ delete  │ -p force-systemd-env-091295                                                                                                 │ force-systemd-env-091295    │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p force-systemd-flag-600209 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-600209   │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p pause-058363 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-058363                │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │ 02 Nov 25 13:29 UTC │
	│ delete  │ -p offline-crio-063012                                                                                                      │ offline-crio-063012         │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p NoKubernetes-784609 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-784609         │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │                     │
	│ start   │ -p NoKubernetes-784609 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-784609         │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │                     │
	│ ssh     │ force-systemd-flag-600209 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                        │ force-systemd-flag-600209   │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │ 02 Nov 25 13:29 UTC │
	│ pause   │ -p pause-058363 --alsologtostderr -v=5                                                                                      │ pause-058363                │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │                     │
	│ delete  │ -p force-systemd-flag-600209                                                                                                │ force-systemd-flag-600209   │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:29:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
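	(To read the entries that follow: in "I1102 13:29:33.764675  198322 out.go:360]", I is the severity (Info; W/E/F are Warning/Error/Fatal), 1102 is the date (Nov 02), 13:29:33.764675 the timestamp, 198322 the thread id, and out.go:360 the source file and line that emitted the message.)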
	I1102 13:29:33.764675  198322 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:29:33.764964  198322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:29:33.764978  198322 out.go:374] Setting ErrFile to fd 2...
	I1102 13:29:33.764985  198322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:29:33.765257  198322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:29:33.765840  198322 out.go:368] Setting JSON to false
	I1102 13:29:33.767135  198322 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4326,"bootTime":1762085848,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:29:33.767220  198322 start.go:143] virtualization: kvm guest
	I1102 13:29:33.768576  198322 out.go:179] * [NoKubernetes-784609] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:29:33.770042  198322 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:29:33.770056  198322 notify.go:221] Checking for updates...
	I1102 13:29:33.772356  198322 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:29:33.777063  198322 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:29:33.778756  198322 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:29:33.780539  198322 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:29:33.781809  198322 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:29:29.729858  193479 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001591088s
	I1102 13:29:29.732756  193479 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 13:29:29.732873  193479 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1102 13:29:29.732987  193479 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 13:29:29.733100  193479 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 13:29:31.537862  193479 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.804979878s
	I1102 13:29:32.133184  193479 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.40037879s
	I1102 13:29:33.734354  193479 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001457931s
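	(kubeadm simply polls each component's local health endpoint until it answers 200. A sketch of the same checks by hand from inside the node, using the ports and paths logged above; -k because the serving certificates are self-signed, and the apiserver endpoint may additionally require credentials depending on anonymous-auth settings:
	  curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
	  curl -k https://127.0.0.1:10259/livez      # kube-scheduler
	  curl -k https://192.168.103.2:8443/livez   # kube-apiserver
	)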
	I1102 13:29:33.748764  193479 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 13:29:33.761640  193479 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 13:29:33.772862  193479 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 13:29:33.773624  193479 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-flag-600209 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 13:29:33.786189  193479 kubeadm.go:319] [bootstrap-token] Using token: 03gyy9.hjcxref0b7grcvt3
	I1102 13:29:33.783633  198322 config.go:182] Loaded profile config "cert-expiration-110310": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:33.783773  198322 config.go:182] Loaded profile config "force-systemd-flag-600209": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:33.783920  198322 config.go:182] Loaded profile config "pause-058363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:33.784026  198322 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:29:33.813845  198322 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:29:33.813948  198322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:29:33.874686  198322 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-02 13:29:33.863321174 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:29:33.874782  198322 docker.go:319] overlay module found
	I1102 13:29:33.876489  198322 out.go:179] * Using the docker driver based on user configuration
	I1102 13:29:33.877730  198322 start.go:309] selected driver: docker
	I1102 13:29:33.877747  198322 start.go:930] validating driver "docker" against <nil>
	I1102 13:29:33.877761  198322 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:29:33.878327  198322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:29:33.939956  198322 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-02 13:29:33.930224463 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:29:33.940167  198322 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 13:29:33.940455  198322 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1102 13:29:33.941748  198322 out.go:179] * Using Docker driver with root privileges
	I1102 13:29:33.942915  198322 cni.go:84] Creating CNI manager for ""
	I1102 13:29:33.942979  198322 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:29:33.942991  198322 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 13:29:33.943050  198322 start.go:353] cluster config:
	{Name:NoKubernetes-784609 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-784609 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:29:33.944269  198322 out.go:179] * Starting "NoKubernetes-784609" primary control-plane node in "NoKubernetes-784609" cluster
	I1102 13:29:33.945208  198322 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:29:33.946392  198322 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:29:33.947534  198322 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:29:33.947604  198322 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:29:33.947620  198322 cache.go:59] Caching tarball of preloaded images
	I1102 13:29:33.947691  198322 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:29:33.947718  198322 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:29:33.947733  198322 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:29:33.947830  198322 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/NoKubernetes-784609/config.json ...
	I1102 13:29:33.947850  198322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/NoKubernetes-784609/config.json: {Name:mkccc03cceb1daada787a26f2ce13d487ba9bce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:29:33.969519  198322 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:29:33.969543  198322 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:29:33.969576  198322 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:29:33.969610  198322 start.go:360] acquireMachinesLock for NoKubernetes-784609: {Name:mk959f5e2796cefb16452b270e806296e590bc2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:29:33.969714  198322 start.go:364] duration metric: took 84.467µs to acquireMachinesLock for "NoKubernetes-784609"
	I1102 13:29:33.969742  198322 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-784609 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-784609 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:29:33.969838  198322 start.go:125] createHost starting for "" (driver="docker")
	I1102 13:29:33.787389  193479 out.go:252]   - Configuring RBAC rules ...
	I1102 13:29:33.787647  193479 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 13:29:33.792631  193479 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 13:29:33.798509  193479 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 13:29:33.803203  193479 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 13:29:33.806326  193479 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 13:29:33.809495  193479 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 13:29:34.140715  193479 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 13:29:33.342763  196946 cli_runner.go:164] Run: docker network inspect pause-058363 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
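	(The long --format argument above is a single Go template that compacts docker network inspect output into one JSON object: the network's Name, Driver, Subnet and Gateway taken from the IPAM config, the MTU if the driver option is set (0 otherwise), and the IPv4 address of every attached container.)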
	I1102 13:29:33.365356  196946 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 13:29:33.370464  196946 kubeadm.go:884] updating cluster {Name:pause-058363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-058363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:29:33.370647  196946 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:29:33.370715  196946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:29:33.417831  196946 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:29:33.417859  196946 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:29:33.417928  196946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:29:33.450139  196946 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:29:33.450166  196946 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:29:33.450198  196946 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 13:29:33.450329  196946 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-058363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-058363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
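	(The paired ExecStart= lines in the generated drop-in are deliberate systemd idiom: in an override, assigning an empty ExecStart= first clears the command inherited from the base kubelet.service, and the second line then sets the replacement; without the reset, systemd would reject the unit for defining ExecStart twice.)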
	I1102 13:29:33.450414  196946 ssh_runner.go:195] Run: crio config
	I1102 13:29:33.505447  196946 cni.go:84] Creating CNI manager for ""
	I1102 13:29:33.505470  196946 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:29:33.505482  196946 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:29:33.505501  196946 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-058363 NodeName:pause-058363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:29:33.505642  196946 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-058363"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
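	
	(The four documents above, InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, are what minikube ships to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. A sketch of how such a file would be consumed on a fresh node; minikube drives the equivalent kubeadm phases itself rather than shelling out exactly like this:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	)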
	
	I1102 13:29:33.505700  196946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:29:33.515005  196946 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:29:33.515069  196946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:29:33.524573  196946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1102 13:29:33.540707  196946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:29:33.556283  196946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1102 13:29:33.572238  196946 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:29:33.577069  196946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:29:33.721634  196946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:29:33.737805  196946 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363 for IP: 192.168.76.2
	I1102 13:29:33.737848  196946 certs.go:195] generating shared ca certs ...
	I1102 13:29:33.737867  196946 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:29:33.738096  196946 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:29:33.738170  196946 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:29:33.738185  196946 certs.go:257] generating profile certs ...
	I1102 13:29:33.738304  196946 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363/client.key
	I1102 13:29:33.738354  196946 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363/apiserver.key.17e416bc
	I1102 13:29:33.738392  196946 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363/proxy-client.key
	I1102 13:29:33.738522  196946 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:29:33.738580  196946 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:29:33.738595  196946 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:29:33.738625  196946 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:29:33.738655  196946 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:29:33.738683  196946 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:29:33.738745  196946 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:29:33.739511  196946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:29:33.761911  196946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:29:33.787267  196946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:29:33.811768  196946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:29:33.830730  196946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1102 13:29:33.850387  196946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:29:33.871526  196946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:29:33.889149  196946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:29:33.909474  196946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:29:33.928501  196946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:29:33.947670  196946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:29:33.967287  196946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:29:33.980058  196946 ssh_runner.go:195] Run: openssl version
	I1102 13:29:33.986272  196946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:29:33.994846  196946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:29:33.998573  196946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:29:33.998623  196946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:29:34.040775  196946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:29:34.051473  196946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:29:34.061123  196946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:29:34.065189  196946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:29:34.065241  196946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:29:34.108158  196946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:29:34.119089  196946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:29:34.130403  196946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:29:34.135297  196946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:29:34.135355  196946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:29:34.189859  196946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
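	(Each hash-and-symlink cycle above is how a certificate enters OpenSSL's hashed trust directory: openssl x509 -hash -noout prints the subject-name hash (51391683, 3ec20f2e, b5213941 here), and the certificate is then linked as /etc/ssl/certs/<hash>.0 so that OpenSSL's lookup-by-hash can resolve it.)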
	I1102 13:29:34.202643  196946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:29:34.208183  196946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:29:34.246658  196946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:29:34.286193  196946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:29:34.333741  196946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:29:34.387827  196946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:29:34.438881  196946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
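	(The -checkend 86400 probes each answer one question per certificate: will it still be valid 86400 seconds, i.e. 24 hours, from now? Exit status 0 means yes; a non-zero status would make minikube regenerate the certificate before restarting the control plane. Checking one by hand over SSH, for example:
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo still-valid || echo expiring
	)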
	I1102 13:29:34.477115  196946 kubeadm.go:401] StartCluster: {Name:pause-058363 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-058363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:29:34.477285  196946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:29:34.477369  196946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:29:34.513196  196946 cri.go:89] found id: "ea36de4ed2b38b9e90147552118fa25e5041e8fd86f0ee201615a3516abe2b58"
	I1102 13:29:34.513221  196946 cri.go:89] found id: "7722241f8df17104a84d22acfefc49d3d8945a84158e17d4c4b789d78ecf2fe4"
	I1102 13:29:34.513226  196946 cri.go:89] found id: "5ef665df9e1b2415548effe9ef112693ed07ea63c9127c6431a2d46d57c7cbde"
	I1102 13:29:34.513230  196946 cri.go:89] found id: "1ea833aea99fd48eed22958ef01c3c5fb1ce73ef434404235f5760cee4d2f625"
	I1102 13:29:34.513233  196946 cri.go:89] found id: "1772e4fcefc4ec45afdb78b4ef07f90b3376d0d44849d8ee3df16cdb2836b477"
	I1102 13:29:34.513235  196946 cri.go:89] found id: "0b915dc404e29c4936639ff0ea3fd53c95e4220a98686b075b5dcc6cbf2803be"
	I1102 13:29:34.513240  196946 cri.go:89] found id: "b01afa1c8739478af11dad3bec848b987c29f9e36ec449e9169e8bb3af1b842b"
	I1102 13:29:34.513244  196946 cri.go:89] found id: ""
	I1102 13:29:34.513297  196946 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:29:34.529611  196946 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:29:34Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:29:34.529675  196946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:29:34.549048  196946 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:29:34.549068  196946 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:29:34.549115  196946 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:29:34.558595  196946 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:29:34.559619  196946 kubeconfig.go:125] found "pause-058363" server: "https://192.168.76.2:8443"
	I1102 13:29:34.560704  196946 kapi.go:59] client config for pause-058363: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363/client.key", CAFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1102 13:29:34.561269  196946 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1102 13:29:34.561292  196946 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1102 13:29:34.561298  196946 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1102 13:29:34.561303  196946 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1102 13:29:34.561309  196946 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1102 13:29:34.561759  196946 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:29:34.572215  196946 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1102 13:29:34.572253  196946 kubeadm.go:602] duration metric: took 23.17782ms to restartPrimaryControlPlane
	I1102 13:29:34.572263  196946 kubeadm.go:403] duration metric: took 95.159679ms to StartCluster
	I1102 13:29:34.572282  196946 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:29:34.572383  196946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:29:34.574787  196946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:29:34.575053  196946 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:29:34.575112  196946 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:29:34.575324  196946 config.go:182] Loaded profile config "pause-058363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:34.577929  196946 out.go:179] * Verifying Kubernetes components...
	I1102 13:29:34.577938  196946 out.go:179] * Enabled addons: 
	I1102 13:29:34.579053  196946 addons.go:515] duration metric: took 3.947538ms for enable addons: enabled=[]
	I1102 13:29:34.579129  196946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:29:34.701593  196946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:29:34.720693  196946 node_ready.go:35] waiting up to 6m0s for node "pause-058363" to be "Ready" ...
	I1102 13:29:34.728655  196946 node_ready.go:49] node "pause-058363" is "Ready"
	I1102 13:29:34.728683  196946 node_ready.go:38] duration metric: took 7.959115ms for node "pause-058363" to be "Ready" ...
	I1102 13:29:34.728696  196946 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:29:34.728745  196946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:29:34.742770  196946 api_server.go:72] duration metric: took 167.6791ms to wait for apiserver process to appear ...
	I1102 13:29:34.742796  196946 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:29:34.742817  196946 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:29:34.748452  196946 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 13:29:34.749476  196946 api_server.go:141] control plane version: v1.34.1
	I1102 13:29:34.749508  196946 api_server.go:131] duration metric: took 6.705151ms to wait for apiserver health ...
	I1102 13:29:34.749520  196946 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:29:34.752928  196946 system_pods.go:59] 7 kube-system pods found
	I1102 13:29:34.752970  196946 system_pods.go:61] "coredns-66bc5c9577-ksfsc" [60dd1d3c-b924-49d8-8615-ccb815c2cd60] Running
	I1102 13:29:34.752982  196946 system_pods.go:61] "etcd-pause-058363" [ac28d146-0275-4ba6-a2e0-b65ee6815fc6] Running
	I1102 13:29:34.752989  196946 system_pods.go:61] "kindnet-wb6rg" [ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e] Running
	I1102 13:29:34.752995  196946 system_pods.go:61] "kube-apiserver-pause-058363" [ed6236b9-444d-4c77-abad-d35d7fe0c8b6] Running
	I1102 13:29:34.753004  196946 system_pods.go:61] "kube-controller-manager-pause-058363" [33972024-3067-4a9e-b8c5-58774b1c5ff2] Running
	I1102 13:29:34.753010  196946 system_pods.go:61] "kube-proxy-52gzz" [fd150b0f-f23c-4501-b9b2-6f7419f54c19] Running
	I1102 13:29:34.753019  196946 system_pods.go:61] "kube-scheduler-pause-058363" [f90ebfd7-2987-4616-8382-7af40c534602] Running
	I1102 13:29:34.753027  196946 system_pods.go:74] duration metric: took 3.499755ms to wait for pod list to return data ...
	I1102 13:29:34.753042  196946 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:29:34.755056  196946 default_sa.go:45] found service account: "default"
	I1102 13:29:34.755076  196946 default_sa.go:55] duration metric: took 2.027073ms for default service account to be created ...
	I1102 13:29:34.755085  196946 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:29:34.757762  196946 system_pods.go:86] 7 kube-system pods found
	I1102 13:29:34.757787  196946 system_pods.go:89] "coredns-66bc5c9577-ksfsc" [60dd1d3c-b924-49d8-8615-ccb815c2cd60] Running
	I1102 13:29:34.757794  196946 system_pods.go:89] "etcd-pause-058363" [ac28d146-0275-4ba6-a2e0-b65ee6815fc6] Running
	I1102 13:29:34.757799  196946 system_pods.go:89] "kindnet-wb6rg" [ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e] Running
	I1102 13:29:34.757804  196946 system_pods.go:89] "kube-apiserver-pause-058363" [ed6236b9-444d-4c77-abad-d35d7fe0c8b6] Running
	I1102 13:29:34.757809  196946 system_pods.go:89] "kube-controller-manager-pause-058363" [33972024-3067-4a9e-b8c5-58774b1c5ff2] Running
	I1102 13:29:34.757814  196946 system_pods.go:89] "kube-proxy-52gzz" [fd150b0f-f23c-4501-b9b2-6f7419f54c19] Running
	I1102 13:29:34.757818  196946 system_pods.go:89] "kube-scheduler-pause-058363" [f90ebfd7-2987-4616-8382-7af40c534602] Running
	I1102 13:29:34.757834  196946 system_pods.go:126] duration metric: took 2.741685ms to wait for k8s-apps to be running ...
	I1102 13:29:34.757841  196946 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:29:34.757886  196946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:29:34.773551  196946 system_svc.go:56] duration metric: took 15.698899ms WaitForService to wait for kubelet
	I1102 13:29:34.773620  196946 kubeadm.go:587] duration metric: took 198.532227ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:29:34.773644  196946 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:29:34.776605  196946 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:29:34.776636  196946 node_conditions.go:123] node cpu capacity is 8
	I1102 13:29:34.776651  196946 node_conditions.go:105] duration metric: took 3.000397ms to run NodePressure ...
	I1102 13:29:34.776666  196946 start.go:242] waiting for startup goroutines ...
	I1102 13:29:34.776677  196946 start.go:247] waiting for cluster config update ...
	I1102 13:29:34.776689  196946 start.go:256] writing updated cluster config ...
	I1102 13:29:34.777002  196946 ssh_runner.go:195] Run: rm -f paused
	I1102 13:29:34.781065  196946 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:29:34.782075  196946 kapi.go:59] client config for pause-058363: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/profiles/pause-058363/client.key", CAFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1102 13:29:34.784905  196946 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ksfsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:34.789096  196946 pod_ready.go:94] pod "coredns-66bc5c9577-ksfsc" is "Ready"
	I1102 13:29:34.789121  196946 pod_ready.go:86] duration metric: took 4.191682ms for pod "coredns-66bc5c9577-ksfsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:34.790882  196946 pod_ready.go:83] waiting for pod "etcd-pause-058363" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:34.794658  196946 pod_ready.go:94] pod "etcd-pause-058363" is "Ready"
	I1102 13:29:34.794680  196946 pod_ready.go:86] duration metric: took 3.78046ms for pod "etcd-pause-058363" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:34.796649  196946 pod_ready.go:83] waiting for pod "kube-apiserver-pause-058363" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:34.800145  196946 pod_ready.go:94] pod "kube-apiserver-pause-058363" is "Ready"
	I1102 13:29:34.800164  196946 pod_ready.go:86] duration metric: took 3.493938ms for pod "kube-apiserver-pause-058363" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:34.801812  196946 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-058363" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:34.555873  193479 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 13:29:35.140074  193479 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 13:29:35.141169  193479 kubeadm.go:319] 
	I1102 13:29:35.141275  193479 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 13:29:35.141285  193479 kubeadm.go:319] 
	I1102 13:29:35.141387  193479 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 13:29:35.141420  193479 kubeadm.go:319] 
	I1102 13:29:35.141467  193479 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 13:29:35.141576  193479 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 13:29:35.141657  193479 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 13:29:35.141668  193479 kubeadm.go:319] 
	I1102 13:29:35.141743  193479 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 13:29:35.141752  193479 kubeadm.go:319] 
	I1102 13:29:35.141819  193479 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 13:29:35.141828  193479 kubeadm.go:319] 
	I1102 13:29:35.141904  193479 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 13:29:35.142041  193479 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 13:29:35.142164  193479 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 13:29:35.142187  193479 kubeadm.go:319] 
	I1102 13:29:35.142305  193479 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 13:29:35.142442  193479 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 13:29:35.142455  193479 kubeadm.go:319] 
	I1102 13:29:35.142591  193479 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 03gyy9.hjcxref0b7grcvt3 \
	I1102 13:29:35.142738  193479 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 \
	I1102 13:29:35.142772  193479 kubeadm.go:319] 	--control-plane 
	I1102 13:29:35.142782  193479 kubeadm.go:319] 
	I1102 13:29:35.142929  193479 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 13:29:35.142946  193479 kubeadm.go:319] 
	I1102 13:29:35.143068  193479 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 03gyy9.hjcxref0b7grcvt3 \
	I1102 13:29:35.143213  193479 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 
	I1102 13:29:35.145960  193479 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1102 13:29:35.146107  193479 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
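	Note: the --discovery-token-ca-cert-hash value printed in the join commands above pins the SHA-256 of the cluster CA's public key. As a minimal sanity-check sketch (assuming the default kubeadm CA path /etc/kubernetes/pki/ca.crt on the control-plane node), the same hash can be recomputed with the pipeline from the kubeadm documentation:
	
	  # recompute the CA public-key hash a joining node verifies against
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	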
	I1102 13:29:35.146138  193479 cni.go:84] Creating CNI manager for ""
	I1102 13:29:35.146151  193479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:29:35.148793  193479 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 13:29:35.150089  193479 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 13:29:35.154526  193479 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 13:29:35.154547  193479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 13:29:35.170016  193479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1102 13:29:35.412184  193479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 13:29:35.412389  193479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:29:35.412432  193479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-flag-600209 minikube.k8s.io/updated_at=2025_11_02T13_29_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=force-systemd-flag-600209 minikube.k8s.io/primary=true
	I1102 13:29:35.423887  193479 ops.go:34] apiserver oom_adj: -16
	I1102 13:29:35.506378  193479 kubeadm.go:1114] duration metric: took 94.049888ms to wait for elevateKubeSystemPrivileges
	I1102 13:29:35.506417  193479 kubeadm.go:403] duration metric: took 11.339227127s to StartCluster
	I1102 13:29:35.506439  193479 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:29:35.506506  193479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:29:35.507676  193479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:29:35.507885  193479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 13:29:35.507898  193479 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:29:35.507967  193479 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:29:35.508075  193479 addons.go:70] Setting storage-provisioner=true in profile "force-systemd-flag-600209"
	I1102 13:29:35.508095  193479 addons.go:239] Setting addon storage-provisioner=true in "force-systemd-flag-600209"
	I1102 13:29:35.508102  193479 config.go:182] Loaded profile config "force-systemd-flag-600209": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:35.508114  193479 addons.go:70] Setting default-storageclass=true in profile "force-systemd-flag-600209"
	I1102 13:29:35.508153  193479 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-flag-600209"
	I1102 13:29:35.508137  193479 host.go:66] Checking if "force-systemd-flag-600209" exists ...
	I1102 13:29:35.508517  193479 cli_runner.go:164] Run: docker container inspect force-systemd-flag-600209 --format={{.State.Status}}
	I1102 13:29:35.508712  193479 cli_runner.go:164] Run: docker container inspect force-systemd-flag-600209 --format={{.State.Status}}
	I1102 13:29:35.511680  193479 out.go:179] * Verifying Kubernetes components...
	I1102 13:29:35.512977  193479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:29:35.533389  193479 kapi.go:59] client config for force-systemd-flag-600209: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/profiles/force-systemd-flag-600209/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/profiles/force-systemd-flag-600209/client.key", CAFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1102 13:29:35.533620  193479 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:29:35.534034  193479 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1102 13:29:35.534059  193479 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1102 13:29:35.534067  193479 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1102 13:29:35.534074  193479 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1102 13:29:35.534089  193479 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1102 13:29:35.534095  193479 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1102 13:29:35.534446  193479 addons.go:239] Setting addon default-storageclass=true in "force-systemd-flag-600209"
	I1102 13:29:35.534486  193479 host.go:66] Checking if "force-systemd-flag-600209" exists ...
	I1102 13:29:35.534710  193479 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:29:35.534726  193479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:29:35.534782  193479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-600209
	I1102 13:29:35.534982  193479 cli_runner.go:164] Run: docker container inspect force-systemd-flag-600209 --format={{.State.Status}}
	I1102 13:29:35.565102  193479 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:29:35.565125  193479 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:29:35.565184  193479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-600209
	I1102 13:29:35.567899  193479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32995 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/force-systemd-flag-600209/id_rsa Username:docker}
	I1102 13:29:35.593014  193479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32995 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/force-systemd-flag-600209/id_rsa Username:docker}
	I1102 13:29:35.602961  193479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 13:29:35.660599  193479 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:29:35.691308  193479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:29:35.710640  193479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:29:35.784985  193479 start.go:1013] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
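	Note: the "kubectl ... replace" pipeline logged at 13:29:35.602961 above rewrites the CoreDNS Corefile in place. Reconstructed from the two sed expressions in that command (an approximation of the resulting ConfigMap, not a capture of it), the injected stanza is:
	
	  hosts {
	     192.168.103.1 host.minikube.internal
	     fallthrough
	  }
	
	plus a "log" directive inserted ahead of "errors"; this is what the "host record injected" line above reports.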
	I1102 13:29:35.785479  193479 kapi.go:59] client config for force-systemd-flag-600209: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/profiles/force-systemd-flag-600209/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/profiles/force-systemd-flag-600209/client.key", CAFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1102 13:29:35.785879  193479 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:29:35.785938  193479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:29:35.785925  193479 kapi.go:59] client config for force-systemd-flag-600209: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/profiles/force-systemd-flag-600209/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/profiles/force-systemd-flag-600209/client.key", CAFile:"/home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1102 13:29:35.982434  193479 api_server.go:72] duration metric: took 474.505345ms to wait for apiserver process to appear ...
	I1102 13:29:35.982458  193479 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:29:35.982474  193479 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1102 13:29:35.988123  193479 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
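	Note: the healthz probe logged above is a plain HTTPS GET whose response body is the literal "ok". A minimal way to reproduce it by hand (a sketch; it assumes the default RBAC binding that exposes /healthz to anonymous clients, hence no client certificate is presented):
	
	  curl -sk https://192.168.103.2:8443/healthz
	  # prints: ok
	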
	I1102 13:29:35.988920  193479 api_server.go:141] control plane version: v1.34.1
	I1102 13:29:35.988944  193479 api_server.go:131] duration metric: took 6.479462ms to wait for apiserver health ...
	I1102 13:29:35.988952  193479 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:29:35.989147  193479 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 13:29:35.990352  193479 addons.go:515] duration metric: took 482.400472ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 13:29:35.991478  193479 system_pods.go:59] 5 kube-system pods found
	I1102 13:29:35.991516  193479 system_pods.go:61] "etcd-force-systemd-flag-600209" [68d3b44d-73da-420e-b8d3-b1d8c05993a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:29:35.991530  193479 system_pods.go:61] "kube-apiserver-force-systemd-flag-600209" [072b7f35-1347-4250-9f94-f22f85673257] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:29:35.991549  193479 system_pods.go:61] "kube-controller-manager-force-systemd-flag-600209" [c0d09917-f222-439b-90a4-92936939451c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:29:35.991592  193479 system_pods.go:61] "kube-scheduler-force-systemd-flag-600209" [99abe996-1351-400f-8b55-6a709baa0d7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:29:35.991604  193479 system_pods.go:61] "storage-provisioner" [8907c6e5-b8ec-45a4-a051-c96ebaa9c286] Pending
	I1102 13:29:35.991612  193479 system_pods.go:74] duration metric: took 2.653999ms to wait for pod list to return data ...
	I1102 13:29:35.991627  193479 kubeadm.go:587] duration metric: took 483.701966ms to wait for: map[apiserver:true system_pods:true]
	I1102 13:29:35.991646  193479 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:29:35.993662  193479 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:29:35.993680  193479 node_conditions.go:123] node cpu capacity is 8
	I1102 13:29:35.993692  193479 node_conditions.go:105] duration metric: took 2.040674ms to run NodePressure ...
	I1102 13:29:35.993705  193479 start.go:242] waiting for startup goroutines ...
	I1102 13:29:36.289208  193479 kapi.go:214] "coredns" deployment in "kube-system" namespace and "force-systemd-flag-600209" context rescaled to 1 replicas
	I1102 13:29:36.289244  193479 start.go:247] waiting for cluster config update ...
	I1102 13:29:36.289262  193479 start.go:256] writing updated cluster config ...
	I1102 13:29:36.289580  193479 ssh_runner.go:195] Run: rm -f paused
	I1102 13:29:36.346864  193479 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:29:36.348421  193479 out.go:179] * Done! kubectl is now configured to use "force-systemd-flag-600209" cluster and "default" namespace by default
	I1102 13:29:35.185928  196946 pod_ready.go:94] pod "kube-controller-manager-pause-058363" is "Ready"
	I1102 13:29:35.185958  196946 pod_ready.go:86] duration metric: took 384.127336ms for pod "kube-controller-manager-pause-058363" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:35.386136  196946 pod_ready.go:83] waiting for pod "kube-proxy-52gzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:35.786367  196946 pod_ready.go:94] pod "kube-proxy-52gzz" is "Ready"
	I1102 13:29:35.786390  196946 pod_ready.go:86] duration metric: took 400.226356ms for pod "kube-proxy-52gzz" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:35.985388  196946 pod_ready.go:83] waiting for pod "kube-scheduler-pause-058363" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:36.385603  196946 pod_ready.go:94] pod "kube-scheduler-pause-058363" is "Ready"
	I1102 13:29:36.385635  196946 pod_ready.go:86] duration metric: took 400.219143ms for pod "kube-scheduler-pause-058363" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:29:36.385649  196946 pod_ready.go:40] duration metric: took 1.604513813s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:29:36.439205  196946 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:29:36.440958  196946 out.go:179] * Done! kubectl is now configured to use "pause-058363" cluster and "default" namespace by default
	I1102 13:29:33.971664  198322 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 13:29:33.971885  198322 start.go:159] libmachine.API.Create for "NoKubernetes-784609" (driver="docker")
	I1102 13:29:33.971928  198322 client.go:173] LocalClient.Create starting
	I1102 13:29:33.972001  198322 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem
	I1102 13:29:33.972042  198322 main.go:143] libmachine: Decoding PEM data...
	I1102 13:29:33.972067  198322 main.go:143] libmachine: Parsing certificate...
	I1102 13:29:33.972166  198322 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem
	I1102 13:29:33.972202  198322 main.go:143] libmachine: Decoding PEM data...
	I1102 13:29:33.972217  198322 main.go:143] libmachine: Parsing certificate...
	I1102 13:29:33.972651  198322 cli_runner.go:164] Run: docker network inspect NoKubernetes-784609 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 13:29:33.990283  198322 cli_runner.go:211] docker network inspect NoKubernetes-784609 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 13:29:33.990341  198322 network_create.go:284] running [docker network inspect NoKubernetes-784609] to gather additional debugging logs...
	I1102 13:29:33.990359  198322 cli_runner.go:164] Run: docker network inspect NoKubernetes-784609
	W1102 13:29:34.009328  198322 cli_runner.go:211] docker network inspect NoKubernetes-784609 returned with exit code 1
	I1102 13:29:34.009360  198322 network_create.go:287] error running [docker network inspect NoKubernetes-784609]: docker network inspect NoKubernetes-784609: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-784609 not found
	I1102 13:29:34.009377  198322 network_create.go:289] output of [docker network inspect NoKubernetes-784609]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-784609 not found
	
	** /stderr **
	I1102 13:29:34.009514  198322 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:29:34.026979  198322 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9493238624b4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:ff:51:3e:e4:f4} reservation:<nil>}
	I1102 13:29:34.027671  198322 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe6e64be95e5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ec:8c:d9:e4:62} reservation:<nil>}
	I1102 13:29:34.028394  198322 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce0c0e777855 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:03:0f:01:14:50} reservation:<nil>}
	I1102 13:29:34.029168  198322 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5d4fa9956052 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:12:e3:3c:90:fa:68} reservation:<nil>}
	I1102 13:29:34.030264  198322 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ecb1e0}
	I1102 13:29:34.030292  198322 network_create.go:124] attempt to create docker network NoKubernetes-784609 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1102 13:29:34.030359  198322 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-784609 NoKubernetes-784609
	I1102 13:29:34.092949  198322 network_create.go:108] docker network NoKubernetes-784609 192.168.85.0/24 created
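	Note: the subnet scan above walks the private /24 ranges upward from 192.168.49.0/24 until it finds one with no existing bridge, then creates the network with the docker command logged at 13:29:34.030359. A quick verification sketch using the standard docker CLI (network name taken from the log):
	
	  docker network inspect NoKubernetes-784609 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  # expected: 192.168.85.0/24 192.168.85.1
	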
	I1102 13:29:34.092995  198322 kic.go:121] calculated static IP "192.168.85.2" for the "NoKubernetes-784609" container
	I1102 13:29:34.093083  198322 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 13:29:34.114847  198322 cli_runner.go:164] Run: docker volume create NoKubernetes-784609 --label name.minikube.sigs.k8s.io=NoKubernetes-784609 --label created_by.minikube.sigs.k8s.io=true
	I1102 13:29:34.138136  198322 oci.go:103] Successfully created a docker volume NoKubernetes-784609
	I1102 13:29:34.138245  198322 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-784609-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-784609 --entrypoint /usr/bin/test -v NoKubernetes-784609:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 13:29:34.576012  198322 oci.go:107] Successfully prepared a docker volume NoKubernetes-784609
	I1102 13:29:34.576070  198322 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:29:34.576093  198322 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 13:29:34.576156  198322 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v NoKubernetes-784609:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.159218735Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.16033246Z" level=info msg="Conmon does support the --sync option"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.160358815Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.160379575Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.161223451Z" level=info msg="Conmon does support the --sync option"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.161240693Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.166233479Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.166260519Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.166935837Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.167496279Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.167556602Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.174043983Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.223466216Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-ksfsc Namespace:kube-system ID:50bc6760d5e789be77afce727e70e0e048dd0aeada2eb709750ee84bb3f1ea82 UID:60dd1d3c-b924-49d8-8615-ccb815c2cd60 NetNS:/var/run/netns/7a61b6d5-a12b-4305-9cdd-e82fd8449fe1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00015c098}] Aliases:map[]}"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.223699856Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-ksfsc for CNI network kindnet (type=ptp)"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.22431561Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224352303Z" level=info msg="Starting seccomp notifier watcher"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224428936Z" level=info msg="Create NRI interface"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224544611Z" level=info msg="built-in NRI default validator is disabled"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224554965Z" level=info msg="runtime interface created"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224591763Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224600742Z" level=info msg="runtime interface starting up..."
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224608393Z" level=info msg="starting plugins..."
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224625226Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.225069429Z" level=info msg="No systemd watchdog enabled"
	Nov 02 13:29:33 pause-058363 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ea36de4ed2b38       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   50bc6760d5e78       coredns-66bc5c9577-ksfsc               kube-system
	7722241f8df17       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   9bf57fa46c63e       kube-proxy-52gzz                       kube-system
	5ef665df9e1b2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   e9b1830a1724e       kindnet-wb6rg                          kube-system
	1ea833aea99fd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   3704e98534dc2       kube-scheduler-pause-058363            kube-system
	1772e4fcefc4e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   1b2c551025cbb       etcd-pause-058363                      kube-system
	0b915dc404e29       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   1ef7018dc206f       kube-controller-manager-pause-058363   kube-system
	b01afa1c87394       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   7f691ea19fbb5       kube-apiserver-pause-058363            kube-system
	
	
	==> coredns [ea36de4ed2b38b9e90147552118fa25e5041e8fd86f0ee201615a3516abe2b58] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41684 - 48721 "HINFO IN 6016917602152271762.8827114019089312895. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029790806s
	
	
	==> describe nodes <==
	Name:               pause-058363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-058363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=pause-058363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_29_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:29:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-058363
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:29:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:29:27 +0000   Sun, 02 Nov 2025 13:29:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:29:27 +0000   Sun, 02 Nov 2025 13:29:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:29:27 +0000   Sun, 02 Nov 2025 13:29:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:29:27 +0000   Sun, 02 Nov 2025 13:29:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-058363
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                39d80cdd-c2a4-41a8-b5a1-4e2090afffd2
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-ksfsc                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-pause-058363                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-wb6rg                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-pause-058363             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-pause-058363    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-52gzz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-pause-058363             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node pause-058363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node pause-058363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node pause-058363 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node pause-058363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node pause-058363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node pause-058363 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node pause-058363 event: Registered Node pause-058363 in Controller
	  Normal  NodeReady                12s                kubelet          Node pause-058363 status is now: NodeReady
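	Note: the Allocated resources block above is consistent with the per-pod table: CPU requests 100m + 100m + 100m + 250m + 200m + 0 + 100m = 850m; the only CPU limit is kindnet's 100m; memory requests 70Mi + 100Mi + 50Mi = 220Mi; memory limits 170Mi + 50Mi = 220Mi.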
	
	
	==> dmesg <==
	[  +0.083631] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023935] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.640330] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 2 12:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.052730] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023920] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +2.047704] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +4.031606] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +8.511092] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[ +16.382292] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 12:51] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	
	
	==> etcd [1772e4fcefc4ec45afdb78b4ef07f90b3376d0d44849d8ee3df16cdb2836b477] <==
	{"level":"warn","ts":"2025-11-02T13:29:07.380415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.389279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.404765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.413442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.421163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.430352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.438764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.447477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.456668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.466191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.478276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.495897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.503760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.518130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.533096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.572906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.587354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.602293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.608764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.619443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.637191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.650363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.658403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.666416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.753963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47838","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:29:40 up  1:12,  0 user,  load average: 4.01, 1.92, 1.31
	Linux pause-058363 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ef665df9e1b2415548effe9ef112693ed07ea63c9127c6431a2d46d57c7cbde] <==
	I1102 13:29:16.756299       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:29:16.848906       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 13:29:16.849065       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:29:16.849085       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:29:16.849120       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:29:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:29:17.148709       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:29:17.248906       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:29:17.248955       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:29:17.249969       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:29:17.350269       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:29:17.350293       1 metrics.go:72] Registering metrics
	I1102 13:29:17.350335       1 controller.go:711] "Syncing nftables rules"
	I1102 13:29:27.054870       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:29:27.054930       1 main.go:301] handling current node
	I1102 13:29:37.060689       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:29:37.060733       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b01afa1c8739478af11dad3bec848b987c29f9e36ec449e9169e8bb3af1b842b] <==
	E1102 13:29:08.473119       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1102 13:29:08.511558       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 13:29:08.523021       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:29:08.524837       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 13:29:08.535740       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:29:08.535863       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 13:29:08.536044       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:29:08.623097       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:29:09.321732       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 13:29:09.329837       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 13:29:09.329913       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:29:09.921349       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:29:09.959711       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:29:10.022749       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 13:29:10.029349       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1102 13:29:10.030681       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 13:29:10.035659       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:29:10.371100       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:29:11.037646       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:29:11.059965       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 13:29:11.075588       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 13:29:16.120743       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:29:16.124412       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:29:16.217729       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1102 13:29:16.419310       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0b915dc404e29c4936639ff0ea3fd53c95e4220a98686b075b5dcc6cbf2803be] <==
	I1102 13:29:15.320630       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 13:29:15.321711       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 13:29:15.325780       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 13:29:15.326144       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-058363" podCIDRs=["10.244.0.0/24"]
	I1102 13:29:15.364871       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1102 13:29:15.365962       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 13:29:15.366006       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:29:15.366056       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 13:29:15.366189       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 13:29:15.367027       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 13:29:15.367116       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 13:29:15.367303       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 13:29:15.369310       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 13:29:15.369556       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 13:29:15.371880       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1102 13:29:15.371922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 13:29:15.371931       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:29:15.371932       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 13:29:15.373108       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 13:29:15.374283       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 13:29:15.376473       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 13:29:15.376521       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 13:29:15.377701       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:29:15.396385       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:29:30.319069       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7722241f8df17104a84d22acfefc49d3d8945a84158e17d4c4b789d78ecf2fe4] <==
	I1102 13:29:16.653717       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:29:16.716093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:29:16.816255       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:29:16.816291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1102 13:29:16.816392       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:29:16.834788       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:29:16.834843       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:29:16.839810       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:29:16.840125       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:29:16.840164       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:29:16.841479       1 config.go:200] "Starting service config controller"
	I1102 13:29:16.841495       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:29:16.841518       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:29:16.841518       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:29:16.841532       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:29:16.841556       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:29:16.841626       1 config.go:309] "Starting node config controller"
	I1102 13:29:16.841678       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:29:16.941703       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:29:16.941730       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:29:16.941687       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 13:29:16.941687       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1ea833aea99fd48eed22958ef01c3c5fb1ce73ef434404235f5760cee4d2f625] <==
	E1102 13:29:08.400291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 13:29:08.400320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 13:29:08.400393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 13:29:08.400465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 13:29:08.400505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 13:29:08.400521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 13:29:08.400170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 13:29:08.400558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 13:29:08.401065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 13:29:09.242424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 13:29:09.270124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 13:29:09.303980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 13:29:09.304687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 13:29:09.359124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 13:29:09.377793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 13:29:09.423314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 13:29:09.448494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 13:29:09.460446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 13:29:09.526692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 13:29:09.560197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 13:29:09.587920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 13:29:09.588007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 13:29:09.598931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1102 13:29:09.720844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1102 13:29:12.191074       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:29:15 pause-058363 kubelet[1349]: I1102 13:29:15.394066    1349 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 02 13:29:15 pause-058363 kubelet[1349]: I1102 13:29:15.394772    1349 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.242860    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd150b0f-f23c-4501-b9b2-6f7419f54c19-kube-proxy\") pod \"kube-proxy-52gzz\" (UID: \"fd150b0f-f23c-4501-b9b2-6f7419f54c19\") " pod="kube-system/kube-proxy-52gzz"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.242927    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd150b0f-f23c-4501-b9b2-6f7419f54c19-lib-modules\") pod \"kube-proxy-52gzz\" (UID: \"fd150b0f-f23c-4501-b9b2-6f7419f54c19\") " pod="kube-system/kube-proxy-52gzz"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.243006    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd150b0f-f23c-4501-b9b2-6f7419f54c19-xtables-lock\") pod \"kube-proxy-52gzz\" (UID: \"fd150b0f-f23c-4501-b9b2-6f7419f54c19\") " pod="kube-system/kube-proxy-52gzz"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.243053    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn2nz\" (UniqueName: \"kubernetes.io/projected/fd150b0f-f23c-4501-b9b2-6f7419f54c19-kube-api-access-vn2nz\") pod \"kube-proxy-52gzz\" (UID: \"fd150b0f-f23c-4501-b9b2-6f7419f54c19\") " pod="kube-system/kube-proxy-52gzz"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.343559    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e-cni-cfg\") pod \"kindnet-wb6rg\" (UID: \"ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e\") " pod="kube-system/kindnet-wb6rg"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.343689    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e-xtables-lock\") pod \"kindnet-wb6rg\" (UID: \"ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e\") " pod="kube-system/kindnet-wb6rg"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.343771    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqf5h\" (UniqueName: \"kubernetes.io/projected/ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e-kube-api-access-wqf5h\") pod \"kindnet-wb6rg\" (UID: \"ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e\") " pod="kube-system/kindnet-wb6rg"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.343826    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e-lib-modules\") pod \"kindnet-wb6rg\" (UID: \"ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e\") " pod="kube-system/kindnet-wb6rg"
	Nov 02 13:29:17 pause-058363 kubelet[1349]: I1102 13:29:17.094822    1349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-52gzz" podStartSLOduration=1.09480605 podStartE2EDuration="1.09480605s" podCreationTimestamp="2025-11-02 13:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:29:17.094803807 +0000 UTC m=+6.238762924" watchObservedRunningTime="2025-11-02 13:29:17.09480605 +0000 UTC m=+6.238765166"
	Nov 02 13:29:23 pause-058363 kubelet[1349]: I1102 13:29:23.451784    1349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wb6rg" podStartSLOduration=7.451764215 podStartE2EDuration="7.451764215s" podCreationTimestamp="2025-11-02 13:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:29:17.131806034 +0000 UTC m=+6.275765170" watchObservedRunningTime="2025-11-02 13:29:23.451764215 +0000 UTC m=+12.595723332"
	Nov 02 13:29:27 pause-058363 kubelet[1349]: I1102 13:29:27.445803    1349 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 02 13:29:27 pause-058363 kubelet[1349]: I1102 13:29:27.526187    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60dd1d3c-b924-49d8-8615-ccb815c2cd60-config-volume\") pod \"coredns-66bc5c9577-ksfsc\" (UID: \"60dd1d3c-b924-49d8-8615-ccb815c2cd60\") " pod="kube-system/coredns-66bc5c9577-ksfsc"
	Nov 02 13:29:27 pause-058363 kubelet[1349]: I1102 13:29:27.526250    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65mfl\" (UniqueName: \"kubernetes.io/projected/60dd1d3c-b924-49d8-8615-ccb815c2cd60-kube-api-access-65mfl\") pod \"coredns-66bc5c9577-ksfsc\" (UID: \"60dd1d3c-b924-49d8-8615-ccb815c2cd60\") " pod="kube-system/coredns-66bc5c9577-ksfsc"
	Nov 02 13:29:28 pause-058363 kubelet[1349]: I1102 13:29:28.120653    1349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ksfsc" podStartSLOduration=12.120629238 podStartE2EDuration="12.120629238s" podCreationTimestamp="2025-11-02 13:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:29:28.120597245 +0000 UTC m=+17.264556355" watchObservedRunningTime="2025-11-02 13:29:28.120629238 +0000 UTC m=+17.264588355"
	Nov 02 13:29:33 pause-058363 kubelet[1349]: W1102 13:29:33.120986    1349 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 02 13:29:33 pause-058363 kubelet[1349]: E1102 13:29:33.121096    1349 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 02 13:29:33 pause-058363 kubelet[1349]: E1102 13:29:33.121187    1349 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 02 13:29:33 pause-058363 kubelet[1349]: E1102 13:29:33.121200    1349 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 02 13:29:36 pause-058363 kubelet[1349]: I1102 13:29:36.917823    1349 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 02 13:29:36 pause-058363 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:29:36 pause-058363 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:29:36 pause-058363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 02 13:29:36 pause-058363 systemd[1]: kubelet.service: Consumed 1.123s CPU time.
	

                                                
                                                
-- /stdout --
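The kubelet entries near the end of the log above show repeated dial failures to /var/run/crio/crio.sock at 13:29:33, followed by systemd stopping kubelet.service at 13:29:36 — timing that lines up with the `pause -p pause-058363` invocation recorded in the audit table further down. A minimal triage sketch for reproducing this by hand, assuming the pause-058363 profile is still up (the `crio` systemd unit name and `crictl` are standard in the kicbase image, but verify on your node):

	# Check whether CRI-O is still serving its socket inside the node container
	minikube ssh -p pause-058363 -- sudo systemctl status crio --no-pager
	minikube ssh -p pause-058363 -- sudo ls -l /var/run/crio/crio.sock
	# If the socket exists, list the sandboxes the kubelet failed to enumerate
	minikube ssh -p pause-058363 -- sudo crictl ps -a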
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-058363 -n pause-058363
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-058363 -n pause-058363: exit status 2 (383.766311ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-058363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
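The post-mortem above, and the network-settings pass that follows, are driven by a fixed sequence of probes in helpers_test.go; when reproducing a TestPause failure locally, the same checks can be run by hand. A sketch using the commands the harness invokes verbatim (the out/minikube-linux-amd64 path assumes a CI-style build tree; substitute your own minikube binary):

	# API server / host state as the harness queries them
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p pause-058363 -n pause-058363
	out/minikube-linux-amd64 status --format='{{.Host}}' -p pause-058363 -n pause-058363
	# Any pods not in phase Running
	kubectl --context pause-058363 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running
	# Container-level view and the last 25 log lines
	docker inspect pause-058363
	out/minikube-linux-amd64 -p pause-058363 logs -n 25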
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-058363
helpers_test.go:243: (dbg) docker inspect pause-058363:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853",
	        "Created": "2025-11-02T13:28:52.746063497Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185826,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:28:52.799365653Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853/hostname",
	        "HostsPath": "/var/lib/docker/containers/dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853/hosts",
	        "LogPath": "/var/lib/docker/containers/dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853/dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853-json.log",
	        "Name": "/pause-058363",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-058363:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-058363",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dba5735002b3975f3cfbecfd3718f4b7dddc9100f88429f402e1dc08b6237853",
	                "LowerDir": "/var/lib/docker/overlay2/32ab61b585bf012734c875a094781ff9579835c319815042251082309a468b3d-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/32ab61b585bf012734c875a094781ff9579835c319815042251082309a468b3d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/32ab61b585bf012734c875a094781ff9579835c319815042251082309a468b3d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/32ab61b585bf012734c875a094781ff9579835c319815042251082309a468b3d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-058363",
	                "Source": "/var/lib/docker/volumes/pause-058363/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-058363",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-058363",
	                "name.minikube.sigs.k8s.io": "pause-058363",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9d8305ca2dd22aa006dfa6189cebce882678f3cb0cd9f45eda73bab5a0af2422",
	            "SandboxKey": "/var/run/docker/netns/9d8305ca2dd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32990"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32991"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32994"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32992"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32993"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-058363": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:1f:9b:41:d5:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5d4fa9956052dd56822e5a32d1ec3f05944e3caba0d017a50ac35c82a56b0508",
	                    "EndpointID": "0f4add9b39a23025a756a66f3d112e731db88166d91671640d7666f6aebb93a1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-058363",
	                        "dba5735002b3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-058363 -n pause-058363
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-058363 -n pause-058363: exit status 2 (367.71231ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-058363 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-744353 --schedule 5m                                                                                                                                                                                    │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --cancel-scheduled                                                                                                                                                                               │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │ 02 Nov 25 13:27 UTC │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │                     │
	│ stop    │ -p scheduled-stop-744353 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:27 UTC │ 02 Nov 25 13:27 UTC │
	│ delete  │ -p scheduled-stop-744353                                                                                                                                                                                                  │ scheduled-stop-744353       │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:28 UTC │
	│ start   │ -p insufficient-storage-449768 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-449768 │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │                     │
	│ delete  │ -p insufficient-storage-449768                                                                                                                                                                                            │ insufficient-storage-449768 │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:28 UTC │
	│ start   │ -p cert-expiration-110310 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-110310      │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p offline-crio-063012 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-063012         │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p pause-058363 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-058363                │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p force-systemd-env-091295 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-091295    │ jenkins │ v1.37.0 │ 02 Nov 25 13:28 UTC │ 02 Nov 25 13:29 UTC │
	│ delete  │ -p force-systemd-env-091295                                                                                                                                                                                               │ force-systemd-env-091295    │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p force-systemd-flag-600209 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-600209   │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p pause-058363 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-058363                │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │ 02 Nov 25 13:29 UTC │
	│ delete  │ -p offline-crio-063012                                                                                                                                                                                                    │ offline-crio-063012         │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p NoKubernetes-784609 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                             │ NoKubernetes-784609         │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │                     │
	│ start   │ -p NoKubernetes-784609 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                     │ NoKubernetes-784609         │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │                     │
	│ ssh     │ force-systemd-flag-600209 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-600209   │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │ 02 Nov 25 13:29 UTC │
	│ pause   │ -p pause-058363 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-058363                │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │                     │
	│ delete  │ -p force-systemd-flag-600209                                                                                                                                                                                              │ force-systemd-flag-600209   │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │ 02 Nov 25 13:29 UTC │
	│ start   │ -p cert-options-514605 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-514605         │ jenkins │ v1.37.0 │ 02 Nov 25 13:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:29:39
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:29:39.890867  200467 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:29:39.891128  200467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:29:39.891132  200467 out.go:374] Setting ErrFile to fd 2...
	I1102 13:29:39.891135  200467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:29:39.891363  200467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:29:39.891910  200467 out.go:368] Setting JSON to false
	I1102 13:29:39.892938  200467 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4332,"bootTime":1762085848,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:29:39.893017  200467 start.go:143] virtualization: kvm guest
	I1102 13:29:39.894945  200467 out.go:179] * [cert-options-514605] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:29:39.896554  200467 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:29:39.896593  200467 notify.go:221] Checking for updates...
	I1102 13:29:39.899458  200467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:29:39.901530  200467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:29:39.903913  200467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:29:39.905432  200467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:29:39.906811  200467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:29:39.909097  200467 config.go:182] Loaded profile config "NoKubernetes-784609": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:39.909232  200467 config.go:182] Loaded profile config "cert-expiration-110310": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:39.909401  200467 config.go:182] Loaded profile config "pause-058363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:39.909513  200467 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:29:39.938283  200467 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:29:39.938370  200467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:29:40.011887  200467 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-02 13:29:39.999837308 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:29:40.012034  200467 docker.go:319] overlay module found
	I1102 13:29:40.015062  200467 out.go:179] * Using the docker driver based on user configuration
	I1102 13:29:40.016373  200467 start.go:309] selected driver: docker
	I1102 13:29:40.016383  200467 start.go:930] validating driver "docker" against <nil>
	I1102 13:29:40.016397  200467 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:29:40.017234  200467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:29:40.095131  200467 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-02 13:29:40.084239022 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:29:40.095384  200467 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 13:29:40.095695  200467 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1102 13:29:40.096983  200467 out.go:179] * Using Docker driver with root privileges
	I1102 13:29:40.098147  200467 cni.go:84] Creating CNI manager for ""
	I1102 13:29:40.098220  200467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:29:40.098227  200467 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 13:29:40.098298  200467 start.go:353] cluster config:
	{Name:cert-options-514605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-options-514605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:29:40.099467  200467 out.go:179] * Starting "cert-options-514605" primary control-plane node in "cert-options-514605" cluster
	I1102 13:29:40.100400  200467 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:29:40.101484  200467 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:29:40.102498  200467 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:29:40.102530  200467 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:29:40.102539  200467 cache.go:59] Caching tarball of preloaded images
	I1102 13:29:40.102622  200467 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:29:40.102658  200467 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:29:40.102666  200467 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:29:40.102787  200467 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/cert-options-514605/config.json ...
	I1102 13:29:40.102804  200467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/cert-options-514605/config.json: {Name:mk9c799e8ca4feae4d0209be8ada4ecb3e680bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:29:40.127917  200467 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:29:40.127939  200467 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:29:40.127960  200467 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:29:40.127995  200467 start.go:360] acquireMachinesLock for cert-options-514605: {Name:mk713d3445da6375f9fb5cd9cbb3e969dd5ff3c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:29:40.128109  200467 start.go:364] duration metric: took 97.758µs to acquireMachinesLock for "cert-options-514605"
	I1102 13:29:40.128135  200467 start.go:93] Provisioning new machine with config: &{Name:cert-options-514605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-options-514605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:29:40.128225  200467 start.go:125] createHost starting for "" (driver="docker")
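	The start log above resolves both the preload tarball and the kic base image from the local cache before host creation begins. A quick sketch for confirming the same cache state, generalized to the default ~/.minikube home rather than the jenkins-specific paths in the log:
	    ls ~/.minikube/cache/preloaded-tarball/
	    docker images gcr.io/k8s-minikube/kicbase-builds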
	
	
	==> CRI-O <==
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.159218735Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.16033246Z" level=info msg="Conmon does support the --sync option"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.160358815Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.160379575Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.161223451Z" level=info msg="Conmon does support the --sync option"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.161240693Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.166233479Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.166260519Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.166935837Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.167496279Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.167556602Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.174043983Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.223466216Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-ksfsc Namespace:kube-system ID:50bc6760d5e789be77afce727e70e0e048dd0aeada2eb709750ee84bb3f1ea82 UID:60dd1d3c-b924-49d8-8615-ccb815c2cd60 NetNS:/var/run/netns/7a61b6d5-a12b-4305-9cdd-e82fd8449fe1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00015c098}] Aliases:map[]}"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.223699856Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-ksfsc for CNI network kindnet (type=ptp)"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.22431561Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224352303Z" level=info msg="Starting seccomp notifier watcher"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224428936Z" level=info msg="Create NRI interface"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224544611Z" level=info msg="built-in NRI default validator is disabled"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224554965Z" level=info msg="runtime interface created"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224591763Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224600742Z" level=info msg="runtime interface starting up..."
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224608393Z" level=info msg="starting plugins..."
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.224625226Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 02 13:29:33 pause-058363 crio[2214]: time="2025-11-02T13:29:33.225069429Z" level=info msg="No systemd watchdog enabled"
	Nov 02 13:29:33 pause-058363 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
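	The block above is CRI-O printing its effective configuration as it starts. On the node the same messages can be re-read from the unit journal, and the drop-in seen earlier in the command table inspected directly; a sketch, assuming the pause-058363 node is still running:
	    minikube ssh -p pause-058363 -- sudo journalctl -u crio --no-pager | tail -n 50
	    minikube ssh -p pause-058363 -- sudo cat /etc/crio/crio.conf.d/02-crio.conf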
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ea36de4ed2b38       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   50bc6760d5e78       coredns-66bc5c9577-ksfsc               kube-system
	7722241f8df17       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   9bf57fa46c63e       kube-proxy-52gzz                       kube-system
	5ef665df9e1b2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   e9b1830a1724e       kindnet-wb6rg                          kube-system
	1ea833aea99fd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago      Running             kube-scheduler            0                   3704e98534dc2       kube-scheduler-pause-058363            kube-system
	1772e4fcefc4e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   36 seconds ago      Running             etcd                      0                   1b2c551025cbb       etcd-pause-058363                      kube-system
	0b915dc404e29       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago      Running             kube-controller-manager   0                   1ef7018dc206f       kube-controller-manager-pause-058363   kube-system
	b01afa1c87394       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago      Running             kube-apiserver            0                   7f691ea19fbb5       kube-apiserver-pause-058363            kube-system
	
	
	==> coredns [ea36de4ed2b38b9e90147552118fa25e5041e8fd86f0ee201615a3516abe2b58] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41684 - 48721 "HINFO IN 6016917602152271762.8827114019089312895. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029790806s
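	The lone HINFO NXDOMAIN line is CoreDNS probing a random name at startup, not a resolution failure. The same stream is available through the pod logs, selected by the standard k8s-app=kube-dns label:
	    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20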
	
	
	==> describe nodes <==
	Name:               pause-058363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-058363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=pause-058363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_29_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:29:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-058363
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:29:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:29:27 +0000   Sun, 02 Nov 2025 13:29:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:29:27 +0000   Sun, 02 Nov 2025 13:29:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:29:27 +0000   Sun, 02 Nov 2025 13:29:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:29:27 +0000   Sun, 02 Nov 2025 13:29:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-058363
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                39d80cdd-c2a4-41a8-b5a1-4e2090afffd2
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-ksfsc                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-058363                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-wb6rg                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-058363             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-058363    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-52gzz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-058363             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node pause-058363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node pause-058363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node pause-058363 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node pause-058363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node pause-058363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node pause-058363 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node pause-058363 event: Registered Node pause-058363 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-058363 status is now: NodeReady
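	This block matches kubectl describe output for the single control-plane node, which is the quickest way to re-check the Ready condition and the allocated-resources totals above:
	    kubectl describe node pause-058363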
	
	
	==> dmesg <==
	[  +0.083631] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023935] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.640330] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 2 12:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.052730] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +1.023920] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +2.047704] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +4.031606] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[  +8.511092] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[ +16.382292] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 12:51] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
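	The repeated "martian source" lines mean the kernel received packets with an implausible source address (127.0.0.1 arriving on eth0) and logged them because martian logging is enabled. A sketch for confirming the setting and the messages on the host:
	    sysctl net.ipv4.conf.all.log_martians
	    dmesg -T | grep -i martian | tail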
	
	
	==> etcd [1772e4fcefc4ec45afdb78b4ef07f90b3376d0d44849d8ee3df16cdb2836b477] <==
	{"level":"warn","ts":"2025-11-02T13:29:07.380415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.389279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.404765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.413442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.421163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.430352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.438764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.447477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.456668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.466191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.478276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.495897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.503760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.518130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.533096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.572906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.587354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.602293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.608764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.619443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.637191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.650363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.658403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.666416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:29:07.753963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47838","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:29:41 up  1:12,  0 user,  load average: 4.01, 1.92, 1.31
	Linux pause-058363 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ef665df9e1b2415548effe9ef112693ed07ea63c9127c6431a2d46d57c7cbde] <==
	I1102 13:29:16.756299       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:29:16.848906       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 13:29:16.849065       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:29:16.849085       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:29:16.849120       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:29:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:29:17.148709       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:29:17.248906       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:29:17.248955       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:29:17.249969       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:29:17.350269       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:29:17.350293       1 metrics.go:72] Registering metrics
	I1102 13:29:17.350335       1 controller.go:711] "Syncing nftables rules"
	I1102 13:29:27.054870       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:29:27.054930       1 main.go:301] handling current node
	I1102 13:29:37.060689       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:29:37.060733       1 main.go:301] handling current node
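	kindnet settles into a ten-second node-sync loop after its caches sync. Assuming the daemonset carries the usual app=kindnet label (an assumption, not shown in this report), its logs can be tailed with:
	    kubectl -n kube-system logs -l app=kindnet --tail=20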
	
	
	==> kube-apiserver [b01afa1c8739478af11dad3bec848b987c29f9e36ec449e9169e8bb3af1b842b] <==
	E1102 13:29:08.473119       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1102 13:29:08.511558       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 13:29:08.523021       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:29:08.524837       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 13:29:08.535740       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:29:08.535863       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 13:29:08.536044       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:29:08.623097       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:29:09.321732       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 13:29:09.329837       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 13:29:09.329913       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:29:09.921349       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:29:09.959711       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:29:10.022749       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 13:29:10.029349       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1102 13:29:10.030681       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 13:29:10.035659       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:29:10.371100       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:29:11.037646       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:29:11.059965       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 13:29:11.075588       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 13:29:16.120743       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:29:16.124412       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:29:16.217729       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1102 13:29:16.419310       1 controller.go:667] quota admission added evaluator for: replicasets.apps
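	The "quota admission added evaluator" lines record admission plugins registering as the first object of each kind is created; nothing here is an error. Overall readiness after this bootstrap can be checked with the verbose readiness endpoint:
	    kubectl get --raw='/readyz?verbose'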
	
	
	==> kube-controller-manager [0b915dc404e29c4936639ff0ea3fd53c95e4220a98686b075b5dcc6cbf2803be] <==
	I1102 13:29:15.320630       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 13:29:15.321711       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 13:29:15.325780       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 13:29:15.326144       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-058363" podCIDRs=["10.244.0.0/24"]
	I1102 13:29:15.364871       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1102 13:29:15.365962       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 13:29:15.366006       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:29:15.366056       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 13:29:15.366189       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 13:29:15.367027       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 13:29:15.367116       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 13:29:15.367303       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 13:29:15.369310       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 13:29:15.369556       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 13:29:15.371880       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1102 13:29:15.371922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 13:29:15.371931       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:29:15.371932       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 13:29:15.373108       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 13:29:15.374283       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 13:29:15.376473       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 13:29:15.376521       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 13:29:15.377701       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:29:15.396385       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:29:30.319069       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7722241f8df17104a84d22acfefc49d3d8945a84158e17d4c4b789d78ecf2fe4] <==
	I1102 13:29:16.653717       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:29:16.716093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:29:16.816255       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:29:16.816291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1102 13:29:16.816392       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:29:16.834788       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:29:16.834843       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:29:16.839810       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:29:16.840125       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:29:16.840164       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:29:16.841479       1 config.go:200] "Starting service config controller"
	I1102 13:29:16.841495       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:29:16.841518       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:29:16.841518       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:29:16.841532       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:29:16.841556       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:29:16.841626       1 config.go:309] "Starting node config controller"
	I1102 13:29:16.841678       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:29:16.941703       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:29:16.941730       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:29:16.941687       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 13:29:16.941687       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
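	The only error above is kube-proxy's own advice: nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. In a kubeadm-managed cluster such as this one the setting lives in the kube-proxy ConfigMap (assumed present, since kubeadm creates it):
	    kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses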
	
	
	==> kube-scheduler [1ea833aea99fd48eed22958ef01c3c5fb1ce73ef434404235f5760cee4d2f625] <==
	E1102 13:29:08.400291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 13:29:08.400320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 13:29:08.400393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 13:29:08.400465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 13:29:08.400505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 13:29:08.400521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 13:29:08.400170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 13:29:08.400558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 13:29:08.401065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 13:29:09.242424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 13:29:09.270124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 13:29:09.303980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 13:29:09.304687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 13:29:09.359124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 13:29:09.377793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 13:29:09.423314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 13:29:09.448494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 13:29:09.460446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 13:29:09.526692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 13:29:09.560197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 13:29:09.587920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 13:29:09.588007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 13:29:09.598931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1102 13:29:09.720844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1102 13:29:12.191074       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:29:15 pause-058363 kubelet[1349]: I1102 13:29:15.394066    1349 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 02 13:29:15 pause-058363 kubelet[1349]: I1102 13:29:15.394772    1349 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.242860    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd150b0f-f23c-4501-b9b2-6f7419f54c19-kube-proxy\") pod \"kube-proxy-52gzz\" (UID: \"fd150b0f-f23c-4501-b9b2-6f7419f54c19\") " pod="kube-system/kube-proxy-52gzz"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.242927    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd150b0f-f23c-4501-b9b2-6f7419f54c19-lib-modules\") pod \"kube-proxy-52gzz\" (UID: \"fd150b0f-f23c-4501-b9b2-6f7419f54c19\") " pod="kube-system/kube-proxy-52gzz"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.243006    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd150b0f-f23c-4501-b9b2-6f7419f54c19-xtables-lock\") pod \"kube-proxy-52gzz\" (UID: \"fd150b0f-f23c-4501-b9b2-6f7419f54c19\") " pod="kube-system/kube-proxy-52gzz"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.243053    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn2nz\" (UniqueName: \"kubernetes.io/projected/fd150b0f-f23c-4501-b9b2-6f7419f54c19-kube-api-access-vn2nz\") pod \"kube-proxy-52gzz\" (UID: \"fd150b0f-f23c-4501-b9b2-6f7419f54c19\") " pod="kube-system/kube-proxy-52gzz"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.343559    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e-cni-cfg\") pod \"kindnet-wb6rg\" (UID: \"ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e\") " pod="kube-system/kindnet-wb6rg"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.343689    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e-xtables-lock\") pod \"kindnet-wb6rg\" (UID: \"ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e\") " pod="kube-system/kindnet-wb6rg"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.343771    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqf5h\" (UniqueName: \"kubernetes.io/projected/ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e-kube-api-access-wqf5h\") pod \"kindnet-wb6rg\" (UID: \"ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e\") " pod="kube-system/kindnet-wb6rg"
	Nov 02 13:29:16 pause-058363 kubelet[1349]: I1102 13:29:16.343826    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e-lib-modules\") pod \"kindnet-wb6rg\" (UID: \"ee4dea9b-4d99-4d9b-b4ae-efceae5abd4e\") " pod="kube-system/kindnet-wb6rg"
	Nov 02 13:29:17 pause-058363 kubelet[1349]: I1102 13:29:17.094822    1349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-52gzz" podStartSLOduration=1.09480605 podStartE2EDuration="1.09480605s" podCreationTimestamp="2025-11-02 13:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:29:17.094803807 +0000 UTC m=+6.238762924" watchObservedRunningTime="2025-11-02 13:29:17.09480605 +0000 UTC m=+6.238765166"
	Nov 02 13:29:23 pause-058363 kubelet[1349]: I1102 13:29:23.451784    1349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wb6rg" podStartSLOduration=7.451764215 podStartE2EDuration="7.451764215s" podCreationTimestamp="2025-11-02 13:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:29:17.131806034 +0000 UTC m=+6.275765170" watchObservedRunningTime="2025-11-02 13:29:23.451764215 +0000 UTC m=+12.595723332"
	Nov 02 13:29:27 pause-058363 kubelet[1349]: I1102 13:29:27.445803    1349 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 02 13:29:27 pause-058363 kubelet[1349]: I1102 13:29:27.526187    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60dd1d3c-b924-49d8-8615-ccb815c2cd60-config-volume\") pod \"coredns-66bc5c9577-ksfsc\" (UID: \"60dd1d3c-b924-49d8-8615-ccb815c2cd60\") " pod="kube-system/coredns-66bc5c9577-ksfsc"
	Nov 02 13:29:27 pause-058363 kubelet[1349]: I1102 13:29:27.526250    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65mfl\" (UniqueName: \"kubernetes.io/projected/60dd1d3c-b924-49d8-8615-ccb815c2cd60-kube-api-access-65mfl\") pod \"coredns-66bc5c9577-ksfsc\" (UID: \"60dd1d3c-b924-49d8-8615-ccb815c2cd60\") " pod="kube-system/coredns-66bc5c9577-ksfsc"
	Nov 02 13:29:28 pause-058363 kubelet[1349]: I1102 13:29:28.120653    1349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ksfsc" podStartSLOduration=12.120629238 podStartE2EDuration="12.120629238s" podCreationTimestamp="2025-11-02 13:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:29:28.120597245 +0000 UTC m=+17.264556355" watchObservedRunningTime="2025-11-02 13:29:28.120629238 +0000 UTC m=+17.264588355"
	Nov 02 13:29:33 pause-058363 kubelet[1349]: W1102 13:29:33.120986    1349 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 02 13:29:33 pause-058363 kubelet[1349]: E1102 13:29:33.121096    1349 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 02 13:29:33 pause-058363 kubelet[1349]: E1102 13:29:33.121187    1349 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 02 13:29:33 pause-058363 kubelet[1349]: E1102 13:29:33.121200    1349 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 02 13:29:36 pause-058363 kubelet[1349]: I1102 13:29:36.917823    1349 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 02 13:29:36 pause-058363 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:29:36 pause-058363 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:29:36 pause-058363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 02 13:29:36 pause-058363 systemd[1]: kubelet.service: Consumed 1.123s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-058363 -n pause-058363
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-058363 -n pause-058363: exit status 2 (353.150803ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-058363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.15s)
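
Note on the failure mode: the kubelet log above ends with the kubelet unable to reach /var/run/crio/crio.sock after the pause. For local triage, a sketch only (these commands are not part of the test run; the profile name pause-058363 is taken from the logs above):

	out/minikube-linux-amd64 -p pause-058363 ssh "sudo ls -l /var/run/crio/crio.sock"   # is the CRI-O socket still present after pausing?
	out/minikube-linux-amd64 -p pause-058363 ssh "sudo systemctl is-active crio"        # is the crio unit itself still active?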

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-054159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-054159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (305.855526ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:35:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-054159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-054159 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-054159 describe deploy/metrics-server -n kube-system: exit status 1 (64.215725ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-054159 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
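
The stderr above locates the failure in minikube's paused-state check rather than in the metrics-server addon itself: "sudo runc list -f json" exits 1 because /run/runc does not exist. A minimal manual re-run of that check, assuming the profile old-k8s-version-054159 is still up (these commands are not part of the test run):

	out/minikube-linux-amd64 -p old-k8s-version-054159 ssh "sudo runc list -f json"   # the exact command the paused check runs
	out/minikube-linux-amd64 -p old-k8s-version-054159 ssh "sudo ls /run/runc"        # does the runc state directory exist at all?
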
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-054159
helpers_test.go:243: (dbg) docker inspect old-k8s-version-054159:

-- stdout --
	[
	    {
	        "Id": "a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066",
	        "Created": "2025-11-02T13:34:24.271262498Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286424,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:34:24.31858015Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/hostname",
	        "HostsPath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/hosts",
	        "LogPath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066-json.log",
	        "Name": "/old-k8s-version-054159",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-054159:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-054159",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066",
	                "LowerDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-054159",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-054159/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-054159",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-054159",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-054159",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "35416dfc31ac2eff87438a1ab08dcdb49f863b0c142371da9501721c24f34e60",
	            "SandboxKey": "/var/run/docker/netns/35416dfc31ac",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-054159": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:46:8a:45:12:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4ae33975e63c84f4b70da6cb2d4c25dac69220c357b8926c3be9f60de4d8948a",
	                    "EndpointID": "bf63e1d25ec58bc6014974cf80f167987ba9fece1f5d234b16ff2faa3d8d2dbe",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-054159",
	                        "a6f2405feedb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
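
When only a few of these fields matter, docker inspect can apply a Go template instead of dumping the whole document; a sketch against the container name from the output above (not part of the test run):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-054159
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-054159   # host port mapped to the API server (8443/tcp)
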
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054159 -n old-k8s-version-054159
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-054159 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-054159 logs -n 25: (1.402640432s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-123357 sudo cat /etc/nsswitch.conf                                                                                                │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo cat /etc/hosts                                                                                                        │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo cat /etc/resolv.conf                                                                                                  │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo crictl pods                                                                                                           │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo crictl ps --all                                                                                                       │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo ip a s                                                                                                                │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo ip r s                                                                                                                │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo iptables-save                                                                                                         │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo iptables -t nat -L -n -v                                                                                              │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo cat /run/flannel/subnet.env                                                                                           │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo cat /etc/kube-flannel/cni-conf.json                                                                                   │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p flannel-123357 sudo systemctl status kubelet --all --full --no-pager                                                                      │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo systemctl cat kubelet --no-pager                                                                                      │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo journalctl -xeu kubelet --all --full --no-pager                                                                       │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo cat /etc/kubernetes/kubelet.conf                                                                                      │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo cat /var/lib/kubelet/config.yaml                                                                                      │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo systemctl status docker --all --full --no-pager                                                                       │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p flannel-123357 sudo systemctl cat docker --no-pager                                                                                       │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p flannel-123357 sudo cat /etc/docker/daemon.json                                                                                           │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p bridge-123357 pgrep -a kubelet                                                                                                            │ bridge-123357          │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p flannel-123357 sudo docker system info                                                                                                    │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-054159 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-054159 │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p flannel-123357 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p flannel-123357 sudo systemctl cat cri-docker --no-pager                                                                                   │ flannel-123357         │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:34:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:34:18.043092  285056 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:34:18.043246  285056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:34:18.043259  285056 out.go:374] Setting ErrFile to fd 2...
	I1102 13:34:18.043265  285056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:34:18.043598  285056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:34:18.044193  285056 out.go:368] Setting JSON to false
	I1102 13:34:18.045426  285056 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4610,"bootTime":1762085848,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:34:18.045508  285056 start.go:143] virtualization: kvm guest
	I1102 13:34:18.047916  285056 out.go:179] * [old-k8s-version-054159] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:34:18.049370  285056 notify.go:221] Checking for updates...
	I1102 13:34:18.049421  285056 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:34:18.050850  285056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:34:18.052331  285056 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:34:18.055126  285056 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:34:18.056441  285056 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:34:18.058671  285056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:34:18.060355  285056 config.go:182] Loaded profile config "bridge-123357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:34:18.060475  285056 config.go:182] Loaded profile config "flannel-123357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:34:18.060600  285056 config.go:182] Loaded profile config "kubernetes-upgrade-273161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:34:18.060730  285056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:34:18.089960  285056 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:34:18.090055  285056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:34:18.172264  285056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-02 13:34:18.161576698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:34:18.172376  285056 docker.go:319] overlay module found
	I1102 13:34:18.174115  285056 out.go:179] * Using the docker driver based on user configuration
	I1102 13:34:18.175333  285056 start.go:309] selected driver: docker
	I1102 13:34:18.175347  285056 start.go:930] validating driver "docker" against <nil>
	I1102 13:34:18.175359  285056 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:34:18.175898  285056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:34:18.239127  285056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-02 13:34:18.226923862 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:34:18.239304  285056 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 13:34:18.239538  285056 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:34:18.241139  285056 out.go:179] * Using Docker driver with root privileges
	I1102 13:34:18.242251  285056 cni.go:84] Creating CNI manager for ""
	I1102 13:34:18.242333  285056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:34:18.242345  285056 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 13:34:18.242415  285056 start.go:353] cluster config:
	{Name:old-k8s-version-054159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-054159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:34:18.243925  285056 out.go:179] * Starting "old-k8s-version-054159" primary control-plane node in "old-k8s-version-054159" cluster
	I1102 13:34:18.245068  285056 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:34:18.246400  285056 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:34:18.247479  285056 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 13:34:18.247505  285056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:34:18.247530  285056 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1102 13:34:18.247543  285056 cache.go:59] Caching tarball of preloaded images
	I1102 13:34:18.247665  285056 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:34:18.247682  285056 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1102 13:34:18.247794  285056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/config.json ...
	I1102 13:34:18.247819  285056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/config.json: {Name:mka0ed78ad55a6e0a4728113f381dd4aeea266f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:18.269868  285056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:34:18.269894  285056 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:34:18.269914  285056 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:34:18.269948  285056 start.go:360] acquireMachinesLock for old-k8s-version-054159: {Name:mk5da25ada32701ecabd34bf4b370c6601c5b561 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:34:18.270061  285056 start.go:364] duration metric: took 89.282µs to acquireMachinesLock for "old-k8s-version-054159"
	I1102 13:34:18.270092  285056 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-054159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-054159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:34:18.270195  285056 start.go:125] createHost starting for "" (driver="docker")
	I1102 13:34:19.380586  272706 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 13:34:19.380653  272706 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 13:34:19.380781  272706 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 13:34:19.380874  272706 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1102 13:34:19.380929  272706 kubeadm.go:319] OS: Linux
	I1102 13:34:19.381007  272706 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 13:34:19.381079  272706 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 13:34:19.381160  272706 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 13:34:19.381237  272706 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 13:34:19.381310  272706 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 13:34:19.381384  272706 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 13:34:19.381464  272706 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 13:34:19.381537  272706 kubeadm.go:319] CGROUPS_IO: enabled
	I1102 13:34:19.381650  272706 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 13:34:19.381753  272706 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 13:34:19.381883  272706 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 13:34:19.381971  272706 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 13:34:19.383894  272706 out.go:252]   - Generating certificates and keys ...
	I1102 13:34:19.383998  272706 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 13:34:19.384100  272706 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 13:34:19.384185  272706 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 13:34:19.384272  272706 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 13:34:19.384367  272706 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 13:34:19.384441  272706 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 13:34:19.384530  272706 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 13:34:19.384722  272706 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-123357 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1102 13:34:19.384789  272706 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 13:34:19.384949  272706 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-123357 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1102 13:34:19.385044  272706 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 13:34:19.385150  272706 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 13:34:19.385222  272706 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 13:34:19.385324  272706 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 13:34:19.385400  272706 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 13:34:19.385494  272706 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 13:34:19.385601  272706 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 13:34:19.385707  272706 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 13:34:19.385783  272706 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 13:34:19.385898  272706 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 13:34:19.386018  272706 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 13:34:19.390144  272706 out.go:252]   - Booting up control plane ...
	I1102 13:34:19.390279  272706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 13:34:19.390369  272706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 13:34:19.390427  272706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 13:34:19.390524  272706 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 13:34:19.390660  272706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 13:34:19.390798  272706 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 13:34:19.390914  272706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 13:34:19.390992  272706 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 13:34:19.391182  272706 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 13:34:19.391335  272706 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 13:34:19.391427  272706 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501635886s
	I1102 13:34:19.391554  272706 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 13:34:19.391669  272706 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1102 13:34:19.391806  272706 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 13:34:19.391915  272706 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 13:34:19.392015  272706 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.622814514s
	I1102 13:34:19.392120  272706 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.112885494s
	I1102 13:34:19.392230  272706 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002078588s
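The three control-plane probes above can be reproduced by hand. A minimal sketch, assuming a shell on the node itself (the 127.0.0.1 ports are node-local, e.g. via minikube ssh); -k skips certificate verification for a quick check:

    curl -k https://192.168.94.2:8443/livez     # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez       # kube-scheduler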
	I1102 13:34:19.392397  272706 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 13:34:19.392587  272706 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 13:34:19.392672  272706 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 13:34:19.392940  272706 kubeadm.go:319] [mark-control-plane] Marking the node flannel-123357 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 13:34:19.393050  272706 kubeadm.go:319] [bootstrap-token] Using token: 9seldr.rwp4cbbywtvyr422
	I1102 13:34:19.399445  272706 out.go:252]   - Configuring RBAC rules ...
	I1102 13:34:19.399622  272706 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 13:34:19.399731  272706 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 13:34:19.399935  272706 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 13:34:19.400147  272706 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 13:34:19.400345  272706 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 13:34:19.400470  272706 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 13:34:19.400617  272706 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 13:34:19.400671  272706 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 13:34:19.400726  272706 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 13:34:19.400735  272706 kubeadm.go:319] 
	I1102 13:34:19.400812  272706 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 13:34:19.400821  272706 kubeadm.go:319] 
	I1102 13:34:19.400923  272706 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 13:34:19.400932  272706 kubeadm.go:319] 
	I1102 13:34:19.400963  272706 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 13:34:19.401091  272706 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 13:34:19.401169  272706 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 13:34:19.401177  272706 kubeadm.go:319] 
	I1102 13:34:19.401245  272706 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 13:34:19.401256  272706 kubeadm.go:319] 
	I1102 13:34:19.401323  272706 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 13:34:19.401329  272706 kubeadm.go:319] 
	I1102 13:34:19.401380  272706 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 13:34:19.401441  272706 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 13:34:19.401497  272706 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 13:34:19.401500  272706 kubeadm.go:319] 
	I1102 13:34:19.401597  272706 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 13:34:19.401680  272706 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 13:34:19.401693  272706 kubeadm.go:319] 
	I1102 13:34:19.401796  272706 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9seldr.rwp4cbbywtvyr422 \
	I1102 13:34:19.401936  272706 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 \
	I1102 13:34:19.401974  272706 kubeadm.go:319] 	--control-plane 
	I1102 13:34:19.401981  272706 kubeadm.go:319] 
	I1102 13:34:19.402114  272706 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 13:34:19.402129  272706 kubeadm.go:319] 
	I1102 13:34:19.402235  272706 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9seldr.rwp4cbbywtvyr422 \
	I1102 13:34:19.402379  272706 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 
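The --discovery-token-ca-cert-hash embedded in the join commands can be recomputed from the cluster CA using the standard kubeadm recipe. A sketch, assuming an RSA CA key and minikube's /var/lib/minikube/certs layout (the path seen in the scp lines elsewhere in this log):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'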
	I1102 13:34:19.402389  272706 cni.go:84] Creating CNI manager for "flannel"
	I1102 13:34:19.404054  272706 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	I1102 13:34:15.306407  278482 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357 for IP: 192.168.103.2
	I1102 13:34:15.306431  278482 certs.go:195] generating shared ca certs ...
	I1102 13:34:15.306450  278482 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:15.306610  278482 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:34:15.306660  278482 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:34:15.306669  278482 certs.go:257] generating profile certs ...
	I1102 13:34:15.306732  278482 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/client.key
	I1102 13:34:15.306745  278482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/client.crt with IP's: []
	I1102 13:34:15.454133  278482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/client.crt ...
	I1102 13:34:15.454162  278482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/client.crt: {Name:mk5a18727559d6704be07a1cc209d6ce1c1706e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:15.454377  278482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/client.key ...
	I1102 13:34:15.454400  278482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/client.key: {Name:mkf3a9b7e4a0c2c1ec8b56cfbfc7c066c2e62125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:15.454536  278482 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.key.5439148c
	I1102 13:34:15.454554  278482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.crt.5439148c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1102 13:34:15.716573  278482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.crt.5439148c ...
	I1102 13:34:15.716603  278482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.crt.5439148c: {Name:mk37b5bed26d657472cd986a5f5f317867c31160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:15.716790  278482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.key.5439148c ...
	I1102 13:34:15.716807  278482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.key.5439148c: {Name:mk3082be17aff3da1ce4426cbac53b59e5a2c602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:15.716911  278482 certs.go:382] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.crt.5439148c -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.crt
	I1102 13:34:15.717014  278482 certs.go:386] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.key.5439148c -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.key
	I1102 13:34:15.717086  278482 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/proxy-client.key
	I1102 13:34:15.717102  278482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/proxy-client.crt with IP's: []
	I1102 13:34:15.785509  278482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/proxy-client.crt ...
	I1102 13:34:15.785539  278482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/proxy-client.crt: {Name:mkd7b1f518480ee91eb21a296d18c4706202b01e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:15.785723  278482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/proxy-client.key ...
	I1102 13:34:15.785741  278482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/proxy-client.key: {Name:mkc508e4a580a4e89394802bf550e2ac29538a39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
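minikube generates these profile certs in-process (crypto.go); the rough OpenSSL equivalent makes the structure clearer. Illustrative only — file names are hypothetical, and the subject follows the usual convention for an admin client cert like the "minikube-user" cert above:

    openssl genrsa -out client.key 2048
    openssl req -new -key client.key \
      -subj "/O=system:masters/CN=minikube-user" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out client.crt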
	I1102 13:34:15.785971  278482 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:34:15.786022  278482 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:34:15.786051  278482 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:34:15.786132  278482 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:34:15.786166  278482 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:34:15.786199  278482 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:34:15.786254  278482 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:34:15.786883  278482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:34:15.817360  278482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:34:15.837751  278482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:34:15.859541  278482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:34:15.902490  278482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1102 13:34:15.940026  278482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:34:15.963139  278482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:34:15.983040  278482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/bridge-123357/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:34:16.004377  278482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:34:16.039412  278482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:34:16.067030  278482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:34:16.105021  278482 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:34:16.126511  278482 ssh_runner.go:195] Run: openssl version
	I1102 13:34:16.135093  278482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:34:16.145791  278482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:34:16.150473  278482 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:34:16.150551  278482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:34:16.198741  278482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:34:16.208810  278482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:34:16.219315  278482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:34:16.224228  278482 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:34:16.224291  278482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:34:16.271753  278482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:34:16.281725  278482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:34:16.292540  278482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:34:16.296793  278482 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:34:16.296842  278482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:34:16.335023  278482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
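The b5213941.0-style names above are OpenSSL subject hashes: TLS libraries locate a CA in /etc/ssl/certs by the hash of its subject, so each PEM gets a <hash>.0 symlink. The same link can be built by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"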
	I1102 13:34:16.343808  278482 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:34:16.347876  278482 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 13:34:16.347937  278482 kubeadm.go:401] StartCluster: {Name:bridge-123357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-123357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:34:16.348020  278482 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:34:16.348095  278482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:34:16.374515  278482 cri.go:89] found id: ""
	I1102 13:34:16.374609  278482 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:34:16.382712  278482 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 13:34:16.390480  278482 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 13:34:16.390557  278482 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 13:34:16.398901  278482 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 13:34:16.398920  278482 kubeadm.go:158] found existing configuration files:
	
	I1102 13:34:16.398966  278482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 13:34:16.407502  278482 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 13:34:16.407559  278482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 13:34:16.414663  278482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 13:34:16.423100  278482 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 13:34:16.423146  278482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 13:34:16.430582  278482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 13:34:16.437944  278482 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 13:34:16.437993  278482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 13:34:16.445312  278482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 13:34:16.452606  278482 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 13:34:16.452655  278482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
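The four grep/rm cycles above are one pattern: keep a kubeconfig only if it already points at control-plane.minikube.internal:8443, otherwise remove it so kubeadm regenerates it. Condensed into a loop (sketch):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" \
        "/etc/kubernetes/${f}.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done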
	I1102 13:34:16.459711  278482 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 13:34:16.521872  278482 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1102 13:34:16.581640  278482 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1102 13:34:19.405544  272706 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 13:34:19.410802  272706 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 13:34:19.410820  272706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1102 13:34:19.424666  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
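Once the manifest is applied, two node-local checks confirm Flannel took effect; the paths below are the conventional CNI/flannel locations, an assumption rather than something this log asserts:

    ls /etc/cni/net.d/            # CNI config installed by the manifest
    cat /run/flannel/subnet.env   # per-node subnet lease, once flannel is running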
	I1102 13:34:19.802423  272706 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 13:34:19.802537  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:19.802648  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-123357 minikube.k8s.io/updated_at=2025_11_02T13_34_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=flannel-123357 minikube.k8s.io/primary=true
	I1102 13:34:19.888424  272706 ops.go:34] apiserver oom_adj: -16
	I1102 13:34:19.888488  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:20.389417  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:18.991715  231852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 13:34:18.992138  231852 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1102 13:34:18.992189  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1102 13:34:18.992245  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1102 13:34:19.029189  231852 cri.go:89] found id: "4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	I1102 13:34:19.029468  231852 cri.go:89] found id: ""
	I1102 13:34:19.029489  231852 logs.go:282] 1 containers: [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be]
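The --quiet listing above returns only container IDs; dropping it, and tailing the container's logs with the ID just found, is the natural next step when debugging by hand:

    sudo crictl ps -a --name=kube-apiserver
    sudo crictl logs --tail 50 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be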
	I1102 13:34:19.029546  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:19.035279  231852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1102 13:34:19.035412  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1102 13:34:19.068711  231852 cri.go:89] found id: ""
	I1102 13:34:19.068741  231852 logs.go:282] 0 containers: []
	W1102 13:34:19.068753  231852 logs.go:284] No container was found matching "etcd"
	I1102 13:34:19.068761  231852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1102 13:34:19.068828  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1102 13:34:19.100734  231852 cri.go:89] found id: ""
	I1102 13:34:19.100759  231852 logs.go:282] 0 containers: []
	W1102 13:34:19.100770  231852 logs.go:284] No container was found matching "coredns"
	I1102 13:34:19.100777  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1102 13:34:19.100830  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1102 13:34:19.134665  231852 cri.go:89] found id: "548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:19.134693  231852 cri.go:89] found id: ""
	I1102 13:34:19.134703  231852 logs.go:282] 1 containers: [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a]
	I1102 13:34:19.134759  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:19.138752  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1102 13:34:19.138816  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1102 13:34:19.171124  231852 cri.go:89] found id: ""
	I1102 13:34:19.171156  231852 logs.go:282] 0 containers: []
	W1102 13:34:19.171167  231852 logs.go:284] No container was found matching "kube-proxy"
	I1102 13:34:19.171175  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1102 13:34:19.171236  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1102 13:34:19.205314  231852 cri.go:89] found id: "0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:19.205341  231852 cri.go:89] found id: ""
	I1102 13:34:19.205351  231852 logs.go:282] 1 containers: [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5]
	I1102 13:34:19.205414  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:19.209894  231852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1102 13:34:19.209955  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1102 13:34:19.240303  231852 cri.go:89] found id: ""
	I1102 13:34:19.240330  231852 logs.go:282] 0 containers: []
	W1102 13:34:19.240341  231852 logs.go:284] No container was found matching "kindnet"
	I1102 13:34:19.240349  231852 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1102 13:34:19.240407  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1102 13:34:19.268368  231852 cri.go:89] found id: ""
	I1102 13:34:19.268393  231852 logs.go:282] 0 containers: []
	W1102 13:34:19.268400  231852 logs.go:284] No container was found matching "storage-provisioner"
	I1102 13:34:19.268410  231852 logs.go:123] Gathering logs for describe nodes ...
	I1102 13:34:19.268421  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1102 13:34:19.338852  231852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1102 13:34:19.338872  231852 logs.go:123] Gathering logs for kube-apiserver [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be] ...
	I1102 13:34:19.338885  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	I1102 13:34:19.373805  231852 logs.go:123] Gathering logs for kube-scheduler [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a] ...
	I1102 13:34:19.373841  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:19.438899  231852 logs.go:123] Gathering logs for kube-controller-manager [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5] ...
	I1102 13:34:19.438936  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:19.474190  231852 logs.go:123] Gathering logs for CRI-O ...
	I1102 13:34:19.474221  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1102 13:34:19.544572  231852 logs.go:123] Gathering logs for container status ...
	I1102 13:34:19.544605  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1102 13:34:19.580933  231852 logs.go:123] Gathering logs for kubelet ...
	I1102 13:34:19.580968  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1102 13:34:19.715202  231852 logs.go:123] Gathering logs for dmesg ...
	I1102 13:34:19.715240  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1102 13:34:18.272174  285056 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 13:34:18.272400  285056 start.go:159] libmachine.API.Create for "old-k8s-version-054159" (driver="docker")
	I1102 13:34:18.272430  285056 client.go:173] LocalClient.Create starting
	I1102 13:34:18.272489  285056 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem
	I1102 13:34:18.272516  285056 main.go:143] libmachine: Decoding PEM data...
	I1102 13:34:18.272534  285056 main.go:143] libmachine: Parsing certificate...
	I1102 13:34:18.272694  285056 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem
	I1102 13:34:18.272723  285056 main.go:143] libmachine: Decoding PEM data...
	I1102 13:34:18.272733  285056 main.go:143] libmachine: Parsing certificate...
	I1102 13:34:18.273045  285056 cli_runner.go:164] Run: docker network inspect old-k8s-version-054159 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 13:34:18.292299  285056 cli_runner.go:211] docker network inspect old-k8s-version-054159 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 13:34:18.292380  285056 network_create.go:284] running [docker network inspect old-k8s-version-054159] to gather additional debugging logs...
	I1102 13:34:18.292403  285056 cli_runner.go:164] Run: docker network inspect old-k8s-version-054159
	W1102 13:34:18.310550  285056 cli_runner.go:211] docker network inspect old-k8s-version-054159 returned with exit code 1
	I1102 13:34:18.310590  285056 network_create.go:287] error running [docker network inspect old-k8s-version-054159]: docker network inspect old-k8s-version-054159: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-054159 not found
	I1102 13:34:18.310606  285056 network_create.go:289] output of [docker network inspect old-k8s-version-054159]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-054159 not found
	
	** /stderr **
	I1102 13:34:18.310741  285056 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:34:18.329190  285056 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9493238624b4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:ff:51:3e:e4:f4} reservation:<nil>}
	I1102 13:34:18.330191  285056 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe6e64be95e5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ec:8c:d9:e4:62} reservation:<nil>}
	I1102 13:34:18.331244  285056 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce0c0e777855 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:03:0f:01:14:50} reservation:<nil>}
	I1102 13:34:18.332235  285056 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f91940}
	I1102 13:34:18.332260  285056 network_create.go:124] attempt to create docker network old-k8s-version-054159 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1102 13:34:18.332312  285056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-054159 old-k8s-version-054159
	I1102 13:34:18.392347  285056 network_create.go:108] docker network old-k8s-version-054159 192.168.76.0/24 created
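A quick way to confirm the subnet and gateway the new network actually received, via a Go template over its IPAM config:

    docker network inspect old-k8s-version-054159 \
      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'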
	I1102 13:34:18.392380  285056 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-054159" container
	I1102 13:34:18.392460  285056 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 13:34:18.412760  285056 cli_runner.go:164] Run: docker volume create old-k8s-version-054159 --label name.minikube.sigs.k8s.io=old-k8s-version-054159 --label created_by.minikube.sigs.k8s.io=true
	I1102 13:34:18.437707  285056 oci.go:103] Successfully created a docker volume old-k8s-version-054159
	I1102 13:34:18.437787  285056 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-054159-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-054159 --entrypoint /usr/bin/test -v old-k8s-version-054159:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 13:34:18.870662  285056 oci.go:107] Successfully prepared a docker volume old-k8s-version-054159
	I1102 13:34:18.870725  285056 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 13:34:18.870752  285056 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 13:34:18.870821  285056 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-054159:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
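To spot-check that the extraction populated the volume, mount it in a throwaway container; /var/lib/containers as CRI-O's image store is an assumption, not something stated in this log:

    docker run --rm -v old-k8s-version-054159:/var busybox ls /var/lib/containers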
	I1102 13:34:20.889533  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:21.389237  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:21.888990  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:22.389487  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:22.889322  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:23.388930  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:23.888661  272706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:24.069010  272706 kubeadm.go:1114] duration metric: took 4.266544055s to wait for elevateKubeSystemPrivileges
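The repeated "get sa default" calls above are elevateKubeSystemPrivileges polling until the default service account exists; roughly, as a shell sketch:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows ~0.5s between attempts
    done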
	I1102 13:34:24.069271  272706 kubeadm.go:403] duration metric: took 17.039533761s to StartCluster
	I1102 13:34:24.069331  272706 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:24.069414  272706 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:34:24.071325  272706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:24.071644  272706 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:34:24.071678  272706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 13:34:24.071825  272706 config.go:182] Loaded profile config "flannel-123357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:34:24.071737  272706 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:34:24.071883  272706 addons.go:70] Setting default-storageclass=true in profile "flannel-123357"
	I1102 13:34:24.071877  272706 addons.go:70] Setting storage-provisioner=true in profile "flannel-123357"
	I1102 13:34:24.071901  272706 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "flannel-123357"
	I1102 13:34:24.071904  272706 addons.go:239] Setting addon storage-provisioner=true in "flannel-123357"
	I1102 13:34:24.071950  272706 host.go:66] Checking if "flannel-123357" exists ...
	I1102 13:34:24.072308  272706 cli_runner.go:164] Run: docker container inspect flannel-123357 --format={{.State.Status}}
	I1102 13:34:24.072493  272706 cli_runner.go:164] Run: docker container inspect flannel-123357 --format={{.State.Status}}
	I1102 13:34:24.074154  272706 out.go:179] * Verifying Kubernetes components...
	I1102 13:34:24.077750  272706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:34:24.105059  272706 addons.go:239] Setting addon default-storageclass=true in "flannel-123357"
	I1102 13:34:24.105131  272706 host.go:66] Checking if "flannel-123357" exists ...
	I1102 13:34:24.105650  272706 cli_runner.go:164] Run: docker container inspect flannel-123357 --format={{.State.Status}}
	I1102 13:34:24.107891  272706 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:34:24.109195  272706 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:34:24.109226  272706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:34:24.109285  272706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-123357
	I1102 13:34:24.150917  272706 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:34:24.152736  272706 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:34:24.152951  272706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-123357
	I1102 13:34:24.178202  272706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/flannel-123357/id_rsa Username:docker}
	I1102 13:34:24.213291  272706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/flannel-123357/id_rsa Username:docker}
	I1102 13:34:24.329692  272706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:34:24.329885  272706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 13:34:24.355396  272706 node_ready.go:35] waiting up to 15m0s for node "flannel-123357" to be "Ready" ...
	I1102 13:34:24.355933  272706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:34:24.383968  272706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:34:24.642844  272706 start.go:1013] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
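The sed pipeline above splices a hosts block (plus a log directive) into the CoreDNS Corefile so that in-cluster DNS resolves host.minikube.internal to the host gateway. The injected fragment:

    hosts {
       192.168.94.1 host.minikube.internal
       fallthrough
    }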
	I1102 13:34:24.902220  272706 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 13:34:24.903250  272706 addons.go:515] duration metric: took 831.509567ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 13:34:25.149367  272706 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-123357" context rescaled to 1 replicas
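The rescale is performed through the API in kapi.go; the equivalent manual command would be:

    kubectl -n kube-system scale deployment coredns --replicas=1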
	I1102 13:34:22.236303  231852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 13:34:22.236773  231852 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1102 13:34:22.236821  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1102 13:34:22.236866  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1102 13:34:22.264789  231852 cri.go:89] found id: "4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	I1102 13:34:22.264814  231852 cri.go:89] found id: ""
	I1102 13:34:22.264823  231852 logs.go:282] 1 containers: [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be]
	I1102 13:34:22.264881  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:22.268949  231852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1102 13:34:22.269002  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1102 13:34:22.295182  231852 cri.go:89] found id: ""
	I1102 13:34:22.295207  231852 logs.go:282] 0 containers: []
	W1102 13:34:22.295214  231852 logs.go:284] No container was found matching "etcd"
	I1102 13:34:22.295219  231852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1102 13:34:22.295264  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1102 13:34:22.322768  231852 cri.go:89] found id: ""
	I1102 13:34:22.322796  231852 logs.go:282] 0 containers: []
	W1102 13:34:22.322805  231852 logs.go:284] No container was found matching "coredns"
	I1102 13:34:22.322812  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1102 13:34:22.322868  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1102 13:34:22.349224  231852 cri.go:89] found id: "548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:22.349250  231852 cri.go:89] found id: ""
	I1102 13:34:22.349259  231852 logs.go:282] 1 containers: [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a]
	I1102 13:34:22.349314  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:22.353249  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1102 13:34:22.353312  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1102 13:34:22.380231  231852 cri.go:89] found id: ""
	I1102 13:34:22.380268  231852 logs.go:282] 0 containers: []
	W1102 13:34:22.380278  231852 logs.go:284] No container was found matching "kube-proxy"
	I1102 13:34:22.380285  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1102 13:34:22.380348  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1102 13:34:22.409912  231852 cri.go:89] found id: "0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:22.409934  231852 cri.go:89] found id: ""
	I1102 13:34:22.409943  231852 logs.go:282] 1 containers: [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5]
	I1102 13:34:22.410004  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:22.414454  231852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1102 13:34:22.414518  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1102 13:34:22.445339  231852 cri.go:89] found id: ""
	I1102 13:34:22.445367  231852 logs.go:282] 0 containers: []
	W1102 13:34:22.445378  231852 logs.go:284] No container was found matching "kindnet"
	I1102 13:34:22.445393  231852 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1102 13:34:22.445447  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1102 13:34:22.476256  231852 cri.go:89] found id: ""
	I1102 13:34:22.476279  231852 logs.go:282] 0 containers: []
	W1102 13:34:22.476288  231852 logs.go:284] No container was found matching "storage-provisioner"
	I1102 13:34:22.476298  231852 logs.go:123] Gathering logs for dmesg ...
	I1102 13:34:22.476309  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1102 13:34:22.491959  231852 logs.go:123] Gathering logs for describe nodes ...
	I1102 13:34:22.491985  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1102 13:34:22.545764  231852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1102 13:34:22.545786  231852 logs.go:123] Gathering logs for kube-apiserver [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be] ...
	I1102 13:34:22.545803  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	I1102 13:34:22.576843  231852 logs.go:123] Gathering logs for kube-scheduler [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a] ...
	I1102 13:34:22.576877  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:22.629116  231852 logs.go:123] Gathering logs for kube-controller-manager [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5] ...
	I1102 13:34:22.629146  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:22.655349  231852 logs.go:123] Gathering logs for CRI-O ...
	I1102 13:34:22.655374  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1102 13:34:22.709840  231852 logs.go:123] Gathering logs for container status ...
	I1102 13:34:22.709872  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1102 13:34:22.740494  231852 logs.go:123] Gathering logs for kubelet ...
	I1102 13:34:22.740521  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1102 13:34:25.328927  231852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 13:34:25.329522  231852 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1102 13:34:25.329605  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1102 13:34:25.329669  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1102 13:34:25.365902  231852 cri.go:89] found id: "4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	I1102 13:34:25.365923  231852 cri.go:89] found id: ""
	I1102 13:34:25.365933  231852 logs.go:282] 1 containers: [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be]
	I1102 13:34:25.365983  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:25.371049  231852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1102 13:34:25.371110  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1102 13:34:25.408178  231852 cri.go:89] found id: ""
	I1102 13:34:25.408204  231852 logs.go:282] 0 containers: []
	W1102 13:34:25.408215  231852 logs.go:284] No container was found matching "etcd"
	I1102 13:34:25.408230  231852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1102 13:34:25.408288  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1102 13:34:25.447293  231852 cri.go:89] found id: ""
	I1102 13:34:25.447393  231852 logs.go:282] 0 containers: []
	W1102 13:34:25.447405  231852 logs.go:284] No container was found matching "coredns"
	I1102 13:34:25.447413  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1102 13:34:25.447499  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1102 13:34:25.484672  231852 cri.go:89] found id: "548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:25.484700  231852 cri.go:89] found id: ""
	I1102 13:34:25.484710  231852 logs.go:282] 1 containers: [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a]
	I1102 13:34:25.484770  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:25.489812  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1102 13:34:25.489911  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1102 13:34:25.523230  231852 cri.go:89] found id: ""
	I1102 13:34:25.523252  231852 logs.go:282] 0 containers: []
	W1102 13:34:25.523259  231852 logs.go:284] No container was found matching "kube-proxy"
	I1102 13:34:25.523314  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1102 13:34:25.523366  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1102 13:34:25.555536  231852 cri.go:89] found id: "0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:25.555588  231852 cri.go:89] found id: ""
	I1102 13:34:25.555599  231852 logs.go:282] 1 containers: [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5]
	I1102 13:34:25.555661  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:25.559934  231852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1102 13:34:25.560006  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1102 13:34:25.595373  231852 cri.go:89] found id: ""
	I1102 13:34:25.595403  231852 logs.go:282] 0 containers: []
	W1102 13:34:25.595412  231852 logs.go:284] No container was found matching "kindnet"
	I1102 13:34:25.595420  231852 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1102 13:34:25.595465  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1102 13:34:25.630425  231852 cri.go:89] found id: ""
	I1102 13:34:25.630453  231852 logs.go:282] 0 containers: []
	W1102 13:34:25.630463  231852 logs.go:284] No container was found matching "storage-provisioner"
	I1102 13:34:25.630474  231852 logs.go:123] Gathering logs for kube-scheduler [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a] ...
	I1102 13:34:25.630491  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:25.704030  231852 logs.go:123] Gathering logs for kube-controller-manager [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5] ...
	I1102 13:34:25.704060  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:25.741703  231852 logs.go:123] Gathering logs for CRI-O ...
	I1102 13:34:25.741729  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1102 13:34:25.821865  231852 logs.go:123] Gathering logs for container status ...
	I1102 13:34:25.821899  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1102 13:34:25.862320  231852 logs.go:123] Gathering logs for kubelet ...
	I1102 13:34:25.862353  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1102 13:34:26.013235  231852 logs.go:123] Gathering logs for dmesg ...
	I1102 13:34:26.013274  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1102 13:34:26.046853  231852 logs.go:123] Gathering logs for describe nodes ...
	I1102 13:34:26.046943  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1102 13:34:26.165824  231852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1102 13:34:26.166025  231852 logs.go:123] Gathering logs for kube-apiserver [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be] ...
	I1102 13:34:26.166065  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	I1102 13:34:24.086785  285056 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-054159:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.215914631s)
	I1102 13:34:24.086824  285056 kic.go:203] duration metric: took 5.216069097s to extract preloaded images to volume ...
	W1102 13:34:24.086931  285056 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1102 13:34:24.087002  285056 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1102 13:34:24.087052  285056 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 13:34:24.247591  285056 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-054159 --name old-k8s-version-054159 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-054159 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-054159 --network old-k8s-version-054159 --ip 192.168.76.2 --volume old-k8s-version-054159:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 13:34:24.659751  285056 cli_runner.go:164] Run: docker container inspect old-k8s-version-054159 --format={{.State.Running}}
	I1102 13:34:24.686545  285056 cli_runner.go:164] Run: docker container inspect old-k8s-version-054159 --format={{.State.Status}}
	I1102 13:34:24.716858  285056 cli_runner.go:164] Run: docker exec old-k8s-version-054159 stat /var/lib/dpkg/alternatives/iptables
	I1102 13:34:24.781609  285056 oci.go:144] the created container "old-k8s-version-054159" has a running status.
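
Right after the `docker run -d`, the driver confirms the container actually reached a running state by inspecting `{{.State.Running}}` and `{{.State.Status}}`. A sketch of that confirmation as a poll (hypothetical helper; assumes the docker CLI is present):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect --format={{.State.Running}}`,
// the same check issued in the log above, until the container reports a
// running status or the timeout expires.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q not running after %s", name, timeout)
}

func main() {
	if err := waitRunning("old-k8s-version-054159", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
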
	I1102 13:34:24.781655  285056 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/old-k8s-version-054159/id_rsa...
	I1102 13:34:25.356525  285056 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-9416/.minikube/machines/old-k8s-version-054159/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 13:34:25.386529  285056 cli_runner.go:164] Run: docker container inspect old-k8s-version-054159 --format={{.State.Status}}
	I1102 13:34:25.409954  285056 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 13:34:25.409976  285056 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-054159 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 13:34:25.465845  285056 cli_runner.go:164] Run: docker container inspect old-k8s-version-054159 --format={{.State.Status}}
	I1102 13:34:25.488597  285056 machine.go:94] provisionDockerMachine start ...
	I1102 13:34:25.488693  285056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054159
	I1102 13:34:25.510782  285056 main.go:143] libmachine: Using SSH client type: native
	I1102 13:34:25.511037  285056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1102 13:34:25.511050  285056 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:34:25.668528  285056 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-054159
	
	I1102 13:34:25.668557  285056 ubuntu.go:182] provisioning hostname "old-k8s-version-054159"
	I1102 13:34:25.668639  285056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054159
	I1102 13:34:25.696997  285056 main.go:143] libmachine: Using SSH client type: native
	I1102 13:34:25.697324  285056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1102 13:34:25.697347  285056 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-054159 && echo "old-k8s-version-054159" | sudo tee /etc/hostname
	I1102 13:34:25.869625  285056 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-054159
	
	I1102 13:34:25.869719  285056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054159
	I1102 13:34:25.900901  285056 main.go:143] libmachine: Using SSH client type: native
	I1102 13:34:25.901285  285056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1102 13:34:25.901322  285056 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-054159' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-054159/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-054159' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:34:26.079101  285056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
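
The shell snippet above is an idempotent /etc/hosts update: do nothing if any line already ends with the hostname, otherwise rewrite an existing 127.0.1.1 entry or append a new one. The same logic in Go (a sketch only: it writes to the given path directly, with no sudo or SSH hop):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry reproduces the grep/sed/tee logic from the SSH command
// above: leave the hosts file alone if the hostname is already mapped,
// otherwise replace an existing 127.0.1.1 line or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", hostname))...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	// /tmp/hosts is a stand-in path for demonstration.
	if err := ensureHostsEntry("/tmp/hosts", "old-k8s-version-054159"); err != nil {
		fmt.Println(err)
	}
}
```
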
	I1102 13:34:26.079298  285056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:34:26.079374  285056 ubuntu.go:190] setting up certificates
	I1102 13:34:26.079397  285056 provision.go:84] configureAuth start
	I1102 13:34:26.079540  285056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-054159
	I1102 13:34:26.105858  285056 provision.go:143] copyHostCerts
	I1102 13:34:26.106229  285056 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:34:26.106277  285056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:34:26.106445  285056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:34:26.106664  285056 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:34:26.106680  285056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:34:26.106777  285056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:34:26.106930  285056 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:34:26.106964  285056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:34:26.107048  285056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:34:26.107242  285056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-054159 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-054159]
	I1102 13:34:26.463150  285056 provision.go:177] copyRemoteCerts
	I1102 13:34:26.463236  285056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:34:26.463282  285056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054159
	I1102 13:34:26.487332  285056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/old-k8s-version-054159/id_rsa Username:docker}
	I1102 13:34:26.599113  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:34:26.623070  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1102 13:34:26.644824  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 13:34:26.667402  285056 provision.go:87] duration metric: took 587.92946ms to configureAuth
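
The configureAuth step above issues a server certificate whose SANs cover every address the machine will be reached by (the `san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-054159]` list). A minimal sketch of building such a SAN-bearing certificate with Go's crypto/x509; note it self-signs for brevity, whereas the real flow signs with the CA key pair named in the log:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// selfSignedServerCert issues a server cert whose SANs are split into IP and
// DNS entries, the way a san=[...] list mixes addresses and hostnames.
func selfSignedServerCert(cn string, sans []string) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: cn},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans { // split SANs into IPs vs DNS names
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	cert, key, err := selfSignedServerCert("minikube",
		[]string{"127.0.0.1", "192.168.76.2", "localhost", "minikube"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("cert %d bytes, key %d bytes\n", len(cert), len(key))
}
```
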
	I1102 13:34:26.667435  285056 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:34:26.667686  285056 config.go:182] Loaded profile config "old-k8s-version-054159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1102 13:34:26.667841  285056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054159
	I1102 13:34:26.693159  285056 main.go:143] libmachine: Using SSH client type: native
	I1102 13:34:26.693453  285056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1102 13:34:26.693480  285056 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:34:26.995958  285056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:34:26.995987  285056 machine.go:97] duration metric: took 1.507366101s to provisionDockerMachine
	I1102 13:34:26.996000  285056 client.go:176] duration metric: took 8.723561781s to LocalClient.Create
	I1102 13:34:26.996021  285056 start.go:167] duration metric: took 8.723620026s to libmachine.API.Create "old-k8s-version-054159"
	I1102 13:34:26.996030  285056 start.go:293] postStartSetup for "old-k8s-version-054159" (driver="docker")
	I1102 13:34:26.996042  285056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:34:26.996136  285056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:34:26.996206  285056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054159
	I1102 13:34:27.018781  285056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/old-k8s-version-054159/id_rsa Username:docker}
	I1102 13:34:27.128126  285056 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:34:27.132407  285056 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:34:27.132448  285056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:34:27.132461  285056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:34:27.132514  285056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:34:27.132653  285056 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:34:27.132773  285056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:34:27.141742  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:34:27.164265  285056 start.go:296] duration metric: took 168.219184ms for postStartSetup
	I1102 13:34:27.164704  285056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-054159
	I1102 13:34:27.185762  285056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/config.json ...
	I1102 13:34:27.186006  285056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:34:27.186071  285056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054159
	I1102 13:34:27.204650  285056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/old-k8s-version-054159/id_rsa Username:docker}
	I1102 13:34:27.310730  285056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:34:27.315069  285056 start.go:128] duration metric: took 9.044860121s to createHost
	I1102 13:34:27.315094  285056 start.go:83] releasing machines lock for "old-k8s-version-054159", held for 9.045016399s
	I1102 13:34:27.315165  285056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-054159
	I1102 13:34:27.332860  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:34:27.332923  285056 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:34:27.332940  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:34:27.332971  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:34:27.333013  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:34:27.333045  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:34:27.333101  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:34:27.333191  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:34:27.333247  285056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054159
	I1102 13:34:27.350675  285056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/old-k8s-version-054159/id_rsa Username:docker}
	I1102 13:34:27.465702  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:34:27.484797  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:34:27.501771  285056 ssh_runner.go:195] Run: openssl version
	I1102 13:34:27.507724  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:34:27.520759  285056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:34:27.526040  285056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:34:27.526103  285056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:34:27.577497  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:34:27.587552  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:34:27.597456  285056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:34:27.601311  285056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:34:27.601357  285056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:34:27.646765  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:34:27.660155  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:34:27.673124  285056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:34:27.678827  285056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:34:27.678888  285056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:34:27.724105  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:34:27.735190  285056 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:34:27.739019  285056 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
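
The certificate installation above hinges on OpenSSL's hashed-link convention: `openssl x509 -hash -noout -in <cert>` prints a subject hash, and the trust store expects a symlink named `<hash>.0` in /etc/ssl/certs (b5213941.0 is minikubeCA's hash in this run). A sketch of that step, assuming openssl is on PATH and writing locally rather than over SSH:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash for a certificate and creates
// the <hash>.0 symlink the trust store looks up, guarded the same way as the
// `test -L ... || ln -fs ...` command in the log.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already exists
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
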
	I1102 13:34:27.742858  285056 ssh_runner.go:195] Run: cat /version.json
	I1102 13:34:27.742933  285056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:34:27.746456  285056 ssh_runner.go:195] Run: systemctl --version
	I1102 13:34:27.810112  285056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:34:27.859420  285056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:34:27.865942  285056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:34:27.866017  285056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:34:27.899325  285056 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1102 13:34:27.899344  285056 start.go:496] detecting cgroup driver to use...
	I1102 13:34:27.899384  285056 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:34:27.899428  285056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:34:27.920079  285056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:34:27.935107  285056 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:34:27.935177  285056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:34:27.955868  285056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:34:27.984613  285056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:34:28.072810  285056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:34:28.159760  285056 docker.go:234] disabling docker service ...
	I1102 13:34:28.159812  285056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:34:28.178233  285056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:34:28.190716  285056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:34:28.277143  285056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:34:28.361190  285056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:34:28.373761  285056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:34:28.388516  285056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1102 13:34:28.388595  285056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:34:28.398986  285056 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:34:28.399041  285056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:34:28.407729  285056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:34:28.416149  285056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:34:28.424699  285056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:34:28.432493  285056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:34:28.441027  285056 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:34:28.454075  285056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:34:28.462585  285056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:34:28.470248  285056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:34:28.477861  285056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:34:28.554112  285056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:34:28.885554  285056 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:34:28.885681  285056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:34:28.890832  285056 start.go:564] Will wait 60s for crictl version
	I1102 13:34:28.890987  285056 ssh_runner.go:195] Run: which crictl
	I1102 13:34:28.895396  285056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:34:28.925238  285056 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:34:28.925334  285056 ssh_runner.go:195] Run: crio --version
	I1102 13:34:28.957532  285056 ssh_runner.go:195] Run: crio --version
	I1102 13:34:28.992319  285056 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
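
After restarting CRI-O, the sequence above waits up to 60s for /var/run/crio/crio.sock to exist before probing `crictl version`. A sketch of that socket wait as a simple stat poll (the 500ms interval is an assumption, not taken from the log):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats the CRI socket path until it appears, mirroring
// "Will wait 60s for socket path /var/run/crio/crio.sock" above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
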
	W1102 13:34:26.359220  272706 node_ready.go:57] node "flannel-123357" has "Ready":"False" status (will retry)
	I1102 13:34:27.858915  272706 node_ready.go:49] node "flannel-123357" is "Ready"
	I1102 13:34:27.858953  272706 node_ready.go:38] duration metric: took 3.503509929s for node "flannel-123357" to be "Ready" ...
	I1102 13:34:27.858982  272706 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:34:27.859037  272706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:34:27.874530  272706 api_server.go:72] duration metric: took 3.80284734s to wait for apiserver process to appear ...
	I1102 13:34:27.874552  272706 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:34:27.874600  272706 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1102 13:34:27.880975  272706 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1102 13:34:27.882208  272706 api_server.go:141] control plane version: v1.34.1
	I1102 13:34:27.882233  272706 api_server.go:131] duration metric: took 7.673777ms to wait for apiserver health ...
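
The health wait above is a plain HTTPS GET against /healthz: "connection refused" produces the "stopped:" lines seen earlier, and a 200 with body "ok" ends the wait. A minimal sketch of one probe, accepting the cluster's self-signed serving cert (verification is skipped here purely for the sketch):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues the same probe as the apiserver health wait above:
// GET https://<node-ip>:8443/healthz, expecting "200: ok".
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp ...: connect: connection refused"
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://192.168.94.2:8443/healthz"))
}
```
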
	I1102 13:34:27.882245  272706 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:34:27.886383  272706 system_pods.go:59] 7 kube-system pods found
	I1102 13:34:27.886417  272706 system_pods.go:61] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:27.886433  272706 system_pods.go:61] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:34:27.886446  272706 system_pods.go:61] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:27.886453  272706 system_pods.go:61] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:27.886462  272706 system_pods.go:61] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:27.886468  272706 system_pods.go:61] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:27.886478  272706 system_pods.go:61] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:34:27.886485  272706 system_pods.go:74] duration metric: took 4.233365ms to wait for pod list to return data ...
	I1102 13:34:27.886498  272706 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:34:27.889938  272706 default_sa.go:45] found service account: "default"
	I1102 13:34:27.889961  272706 default_sa.go:55] duration metric: took 3.456706ms for default service account to be created ...
	I1102 13:34:27.889972  272706 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:34:27.893364  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:27.893391  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:27.893397  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:34:27.893405  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:27.893411  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:27.893416  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:27.893421  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:27.893431  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:34:27.893460  272706 retry.go:31] will retry after 250.071049ms: missing components: kube-dns
	I1102 13:34:28.147006  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:28.147038  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:28.147045  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:34:28.147051  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:28.147055  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:28.147058  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:28.147061  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:28.147065  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:34:28.147084  272706 retry.go:31] will retry after 376.179395ms: missing components: kube-dns
	I1102 13:34:28.526759  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:28.526790  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:28.526798  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:34:28.526804  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:28.526809  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:28.526812  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:28.526816  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:28.526823  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:34:28.526838  272706 retry.go:31] will retry after 307.876241ms: missing components: kube-dns
	I1102 13:34:28.839292  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:28.839343  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:28.839353  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running
	I1102 13:34:28.839362  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:28.839370  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:28.839375  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:28.839380  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:28.839385  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Running
	I1102 13:34:28.839404  272706 retry.go:31] will retry after 485.150795ms: missing components: kube-dns
	I1102 13:34:29.329614  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:29.329658  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:29.329665  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running
	I1102 13:34:29.329673  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:29.329682  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:29.329687  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:29.329692  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:29.329697  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Running
	I1102 13:34:29.329713  272706 retry.go:31] will retry after 654.0296ms: missing components: kube-dns
	I1102 13:34:29.991055  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:29.991093  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:29.991101  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running
	I1102 13:34:29.991109  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:29.991116  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:29.991123  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:29.991127  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:29.991133  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Running
	I1102 13:34:29.991150  272706 retry.go:31] will retry after 822.502206ms: missing components: kube-dns
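
The "will retry after 250ms / 376ms / 307ms / ..." lines above come from a jittered, growing backoff around a check that reports which components are still missing (here, kube-dns until coredns leaves Pending). A sketch of that pattern, with a hypothetical `check` callback and backoff constants chosen for illustration:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents retries check() with randomized, growing delays until it
// reports nothing missing or the deadline passes.
func waitForComponents(timeout time.Duration, check func() []string) error {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; missing components: %v", missing)
		}
		// jittered delay, roughly doubling, capped to keep polls frequent
		d := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: missing components: %v\n", d, missing)
		time.Sleep(d)
		if backoff < 2*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	calls := 0
	err := waitForComponents(10*time.Second, func() []string {
		calls++
		if calls < 3 {
			return []string{"kube-dns"}
		}
		return nil
	})
	fmt.Println("done:", err)
}
```
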
	I1102 13:34:30.880709  278482 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 13:34:30.880796  278482 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 13:34:30.880910  278482 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 13:34:30.880970  278482 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1102 13:34:30.881002  278482 kubeadm.go:319] OS: Linux
	I1102 13:34:30.881046  278482 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 13:34:30.881087  278482 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 13:34:30.881145  278482 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 13:34:30.881187  278482 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 13:34:30.881228  278482 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 13:34:30.881268  278482 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 13:34:30.881309  278482 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 13:34:30.881372  278482 kubeadm.go:319] CGROUPS_IO: enabled
	I1102 13:34:30.881470  278482 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 13:34:30.881624  278482 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 13:34:30.881749  278482 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 13:34:30.881838  278482 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 13:34:30.884108  278482 out.go:252]   - Generating certificates and keys ...
	I1102 13:34:30.884183  278482 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 13:34:30.884250  278482 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 13:34:30.884340  278482 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 13:34:30.884426  278482 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 13:34:30.884519  278482 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 13:34:30.884622  278482 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 13:34:30.884710  278482 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 13:34:30.884888  278482 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-123357 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1102 13:34:30.884972  278482 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 13:34:30.885142  278482 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-123357 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1102 13:34:30.885202  278482 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 13:34:30.885253  278482 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 13:34:30.885294  278482 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 13:34:30.885338  278482 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 13:34:30.885414  278482 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 13:34:30.885498  278482 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 13:34:30.885545  278482 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 13:34:30.885622  278482 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 13:34:30.885672  278482 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 13:34:30.885736  278482 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 13:34:30.885788  278482 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 13:34:30.886833  278482 out.go:252]   - Booting up control plane ...
	I1102 13:34:30.886901  278482 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 13:34:30.886962  278482 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 13:34:30.887042  278482 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 13:34:30.887220  278482 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 13:34:30.887321  278482 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 13:34:30.887461  278482 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 13:34:30.887591  278482 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 13:34:30.887654  278482 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 13:34:30.887844  278482 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 13:34:30.887934  278482 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 13:34:30.887985  278482 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001931831s
	I1102 13:34:30.888058  278482 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 13:34:30.888141  278482 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1102 13:34:30.888266  278482 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 13:34:30.888366  278482 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 13:34:30.888484  278482 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.21802108s
	I1102 13:34:30.888596  278482 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.225249359s
	I1102 13:34:30.888695  278482 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.0021466s
	I1102 13:34:30.888845  278482 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 13:34:30.889025  278482 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 13:34:30.889121  278482 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 13:34:30.889299  278482 kubeadm.go:319] [mark-control-plane] Marking the node bridge-123357 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 13:34:30.889371  278482 kubeadm.go:319] [bootstrap-token] Using token: dx33r6.ii6z42iucfwm6k85
	I1102 13:34:30.890646  278482 out.go:252]   - Configuring RBAC rules ...
	I1102 13:34:30.890756  278482 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 13:34:30.890825  278482 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 13:34:30.891014  278482 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 13:34:30.891206  278482 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 13:34:30.891383  278482 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 13:34:30.891496  278482 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 13:34:30.891662  278482 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 13:34:30.891726  278482 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 13:34:30.891792  278482 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 13:34:30.891799  278482 kubeadm.go:319] 
	I1102 13:34:30.891886  278482 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 13:34:30.891901  278482 kubeadm.go:319] 
	I1102 13:34:30.891988  278482 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 13:34:30.892000  278482 kubeadm.go:319] 
	I1102 13:34:30.892037  278482 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 13:34:30.892124  278482 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 13:34:30.892176  278482 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 13:34:30.892188  278482 kubeadm.go:319] 
	I1102 13:34:30.892289  278482 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 13:34:30.892301  278482 kubeadm.go:319] 
	I1102 13:34:30.892364  278482 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 13:34:30.892374  278482 kubeadm.go:319] 
	I1102 13:34:30.892443  278482 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 13:34:30.892543  278482 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 13:34:30.892669  278482 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 13:34:30.892682  278482 kubeadm.go:319] 
	I1102 13:34:30.892794  278482 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 13:34:30.892900  278482 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 13:34:30.892906  278482 kubeadm.go:319] 
	I1102 13:34:30.892970  278482 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dx33r6.ii6z42iucfwm6k85 \
	I1102 13:34:30.893085  278482 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 \
	I1102 13:34:30.893125  278482 kubeadm.go:319] 	--control-plane 
	I1102 13:34:30.893135  278482 kubeadm.go:319] 
	I1102 13:34:30.893251  278482 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 13:34:30.893260  278482 kubeadm.go:319] 
	I1102 13:34:30.893390  278482 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dx33r6.ii6z42iucfwm6k85 \
	I1102 13:34:30.893517  278482 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 
	I1102 13:34:30.893531  278482 cni.go:84] Creating CNI manager for "bridge"
	I1102 13:34:30.894867  278482 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
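
The `--discovery-token-ca-cert-hash sha256:...` value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch of recomputing it for verification (the ca.crt path is the certs dir named earlier in the kubeadm output):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes the value passed as --discovery-token-ca-cert-hash:
// sha256 over the DER-encoded SubjectPublicKeyInfo of the CA certificate.
func caCertHash(caPEMPath string) (string, error) {
	data, err := os.ReadFile(caPEMPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPEMPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(h) // compare with the hash in the kubeadm join command
}
```
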
	I1102 13:34:28.722317  231852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 13:34:28.722733  231852 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1102 13:34:28.722784  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1102 13:34:28.722834  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1102 13:34:28.751516  231852 cri.go:89] found id: "4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	I1102 13:34:28.751542  231852 cri.go:89] found id: ""
	I1102 13:34:28.751552  231852 logs.go:282] 1 containers: [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be]
	I1102 13:34:28.751626  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:28.755656  231852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1102 13:34:28.755713  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1102 13:34:28.793754  231852 cri.go:89] found id: ""
	I1102 13:34:28.793784  231852 logs.go:282] 0 containers: []
	W1102 13:34:28.793796  231852 logs.go:284] No container was found matching "etcd"
	I1102 13:34:28.793804  231852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1102 13:34:28.793860  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1102 13:34:28.827900  231852 cri.go:89] found id: ""
	I1102 13:34:28.827932  231852 logs.go:282] 0 containers: []
	W1102 13:34:28.827943  231852 logs.go:284] No container was found matching "coredns"
	I1102 13:34:28.827951  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1102 13:34:28.828013  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1102 13:34:28.863821  231852 cri.go:89] found id: "548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:28.863856  231852 cri.go:89] found id: ""
	I1102 13:34:28.863866  231852 logs.go:282] 1 containers: [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a]
	I1102 13:34:28.863925  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:28.869713  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1102 13:34:28.869792  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1102 13:34:28.909619  231852 cri.go:89] found id: ""
	I1102 13:34:28.909648  231852 logs.go:282] 0 containers: []
	W1102 13:34:28.909659  231852 logs.go:284] No container was found matching "kube-proxy"
	I1102 13:34:28.909666  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1102 13:34:28.909727  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1102 13:34:28.942952  231852 cri.go:89] found id: "0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:28.943038  231852 cri.go:89] found id: ""
	I1102 13:34:28.943060  231852 logs.go:282] 1 containers: [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5]
	I1102 13:34:28.943118  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:28.948088  231852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1102 13:34:28.948156  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1102 13:34:28.981915  231852 cri.go:89] found id: ""
	I1102 13:34:28.981941  231852 logs.go:282] 0 containers: []
	W1102 13:34:28.981951  231852 logs.go:284] No container was found matching "kindnet"
	I1102 13:34:28.981959  231852 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1102 13:34:28.982017  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1102 13:34:29.016636  231852 cri.go:89] found id: ""
	I1102 13:34:29.016663  231852 logs.go:282] 0 containers: []
	W1102 13:34:29.016673  231852 logs.go:284] No container was found matching "storage-provisioner"
	I1102 13:34:29.016685  231852 logs.go:123] Gathering logs for container status ...
	I1102 13:34:29.016699  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1102 13:34:29.054217  231852 logs.go:123] Gathering logs for kubelet ...
	I1102 13:34:29.054252  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1102 13:34:29.171653  231852 logs.go:123] Gathering logs for dmesg ...
	I1102 13:34:29.171685  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1102 13:34:29.189532  231852 logs.go:123] Gathering logs for describe nodes ...
	I1102 13:34:29.189560  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1102 13:34:29.261115  231852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1102 13:34:29.261139  231852 logs.go:123] Gathering logs for kube-apiserver [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be] ...
	I1102 13:34:29.261154  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	I1102 13:34:29.305732  231852 logs.go:123] Gathering logs for kube-scheduler [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a] ...
	I1102 13:34:29.305764  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:29.371351  231852 logs.go:123] Gathering logs for kube-controller-manager [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5] ...
	I1102 13:34:29.371389  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:29.400196  231852 logs.go:123] Gathering logs for CRI-O ...
	I1102 13:34:29.400229  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
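Each log-gathering pass above follows the same sweep: query crictl for a named control-plane container, then tail its logs if one exists. A minimal sketch of that loop (container names and the --tail value are taken from the log; the loop itself is illustrative rather than minikube's Go implementation):

```bash
# Per-component log sweep: look up each expected container by name and
# tail the last 400 lines of any that actually exist.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet storage-provisioner; do
  id=$(sudo crictl ps -a --quiet --name="$name" | head -n1)
  # Components with no container (e.g. kindnet above) are simply skipped.
  [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
done
```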
	I1102 13:34:28.997141  285056 cli_runner.go:164] Run: docker network inspect old-k8s-version-054159 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:34:29.019781  285056 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 13:34:29.024714  285056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
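The grep/cp pair above is minikube's atomic /etc/hosts refresh: drop any stale record for the name, append the current gateway mapping, and install the temp file in a single copy. A standalone sketch of the same pattern, with the IP and hostname from the log:

```bash
# Keep every line that does not already end in "<TAB>host.minikube.internal",
# append the fresh record, then cp the temp file into place so /etc/hosts
# is never observed half-written.
IP="192.168.76.1"; NAME="host.minikube.internal"
{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
```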
	I1102 13:34:29.037979  285056 kubeadm.go:884] updating cluster {Name:old-k8s-version-054159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-054159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:34:29.038162  285056 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 13:34:29.038236  285056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:34:29.078013  285056 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:34:29.078038  285056 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:34:29.078093  285056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:34:29.111586  285056 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:34:29.111611  285056 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:34:29.111620  285056 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1102 13:34:29.111730  285056 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-054159 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-054159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
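The empty `ExecStart=` in the rendered unit above is the standard systemd drop-in idiom: it clears the ExecStart inherited from kubelet.service so that the following line replaces, rather than appends to, the command. Writing the same drop-in by hand (paths and flags copied from the log) would look roughly like:

```bash
# Sketch: install the kubelet drop-in manually. The blank ExecStart=
# resets the unit's command list; systemd rejects a second ExecStart on a
# non-oneshot service unless the list is cleared first.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-054159 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
EOF
sudo systemctl daemon-reload
```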
	I1102 13:34:29.111814  285056 ssh_runner.go:195] Run: crio config
	I1102 13:34:29.166551  285056 cni.go:84] Creating CNI manager for ""
	I1102 13:34:29.166591  285056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:34:29.166605  285056 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:34:29.166634  285056 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-054159 NodeName:old-k8s-version-054159 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:34:29.166785  285056 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-054159"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:34:29.166850  285056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1102 13:34:29.176748  285056 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:34:29.176831  285056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:34:29.186615  285056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1102 13:34:29.202529  285056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:34:29.222011  285056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
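The 2159-byte file scp'd above is the four-document kubeadm config rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration joined by `---`). A config like this can be sanity-checked without mutating the node; a sketch, using the binary path and file name from the log:

```bash
# kubeadm init --dry-run exercises the init phases against the rendered
# config but writes only to a temporary directory, so nothing on the
# node changes.
sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
```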
	I1102 13:34:29.238777  285056 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:34:29.243685  285056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:34:29.256228  285056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:34:29.359101  285056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:34:29.384216  285056 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159 for IP: 192.168.76.2
	I1102 13:34:29.384242  285056 certs.go:195] generating shared ca certs ...
	I1102 13:34:29.384264  285056 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:29.384412  285056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:34:29.384450  285056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:34:29.384459  285056 certs.go:257] generating profile certs ...
	I1102 13:34:29.384506  285056 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/client.key
	I1102 13:34:29.384518  285056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/client.crt with IP's: []
	I1102 13:34:29.540185  285056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/client.crt ...
	I1102 13:34:29.540217  285056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/client.crt: {Name:mk49f1d6013b12a5be191712db07f65c3a9501b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:29.540374  285056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/client.key ...
	I1102 13:34:29.540389  285056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/client.key: {Name:mkb7a06f1dc037a9ce3cdd61ad7407732d716b92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:29.540504  285056 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.key.76bebbe6
	I1102 13:34:29.540521  285056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.crt.76bebbe6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1102 13:34:29.650654  285056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.crt.76bebbe6 ...
	I1102 13:34:29.650682  285056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.crt.76bebbe6: {Name:mk718c5c56a8f10d168a3636eda0e6013e8116f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:29.650849  285056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.key.76bebbe6 ...
	I1102 13:34:29.650863  285056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.key.76bebbe6: {Name:mka7b5fb46c17d8432f058a3a077c16b0be8f6fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:29.650933  285056 certs.go:382] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.crt.76bebbe6 -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.crt
	I1102 13:34:29.651006  285056 certs.go:386] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.key.76bebbe6 -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.key
	I1102 13:34:29.651096  285056 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/proxy-client.key
	I1102 13:34:29.651121  285056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/proxy-client.crt with IP's: []
	I1102 13:34:29.805247  285056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/proxy-client.crt ...
	I1102 13:34:29.805289  285056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/proxy-client.crt: {Name:mk94f37c00cf35684c5448783326537389e0541b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:29.805468  285056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/proxy-client.key ...
	I1102 13:34:29.805488  285056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/proxy-client.key: {Name:mk1d22f08595b3e12bccac81d274794b6544f2d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
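minikube generates these profile certificates in Go (crypto.go), but the apiserver cert above is equivalent to a plain CA-signed certificate carrying the four IP SANs printed in the log. A rough openssl rendition, with illustrative file names (ca.crt/ca.key stand in for the shared minikubeCA pair):

```bash
# Create a key and CSR, then sign it with the CA, attaching the same IP
# SANs the log reports: 10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2.
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
  -keyout apiserver.key -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out apiserver.crt \
  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2')
```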
	I1102 13:34:29.805732  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:34:29.805787  285056 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:34:29.805800  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:34:29.805834  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:34:29.805872  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:34:29.805906  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:34:29.805963  285056 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:34:29.806550  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:34:29.825446  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:34:29.842659  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:34:29.859729  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:34:29.878470  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1102 13:34:29.897058  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:34:29.918116  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:34:29.939064  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/old-k8s-version-054159/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:34:29.956506  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:34:29.974374  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:34:29.993866  285056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:34:30.012249  285056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:34:30.026043  285056 ssh_runner.go:195] Run: openssl version
	I1102 13:34:30.032983  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:34:30.042441  285056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:34:30.046472  285056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:34:30.046532  285056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:34:30.092139  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:34:30.104691  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:34:30.116890  285056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:34:30.122497  285056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:34:30.122560  285056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:34:30.159949  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:34:30.168766  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:34:30.177263  285056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:34:30.180977  285056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:34:30.181033  285056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:34:30.219653  285056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
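The 8-character names being symlinked above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes; OpenSSL looks certificates up in /etc/ssl/certs by that hash, so each installed PEM needs a matching `<hash>.0` link. Deriving one by hand:

```bash
# Compute a PEM's subject hash and create the lookup symlink, the same
# operation the log performs for each installed CA.
PEM=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$PEM")   # prints e.g. b5213941
sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"
```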
	I1102 13:34:30.231207  285056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:34:30.236679  285056 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 13:34:30.236742  285056 kubeadm.go:401] StartCluster: {Name:old-k8s-version-054159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-054159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:34:30.236821  285056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:34:30.236872  285056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:34:30.266923  285056 cri.go:89] found id: ""
	I1102 13:34:30.266997  285056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:34:30.276556  285056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 13:34:30.285606  285056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 13:34:30.285670  285056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 13:34:30.294164  285056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 13:34:30.294189  285056 kubeadm.go:158] found existing configuration files:
	
	I1102 13:34:30.294239  285056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 13:34:30.302968  285056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 13:34:30.303040  285056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 13:34:30.310774  285056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 13:34:30.318774  285056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 13:34:30.318832  285056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 13:34:30.325951  285056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 13:34:30.333369  285056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 13:34:30.333428  285056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 13:34:30.340794  285056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 13:34:30.348289  285056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 13:34:30.348357  285056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
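The four grep-then-rm exchanges above all apply one rule: a leftover kubeconfig is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so kubeadm can regenerate it. A condensed sketch of the same cleanup:

```bash
# Keep each kubeconfig only when it already references the expected
# endpoint; delete it otherwise so kubeadm rewrites it from scratch.
EP="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "$EP" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
done
```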
	I1102 13:34:30.355879  285056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 13:34:30.435101  285056 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1102 13:34:30.506378  285056 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1102 13:34:30.896049  278482 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1102 13:34:30.904681  278482 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1102 13:34:30.918437  278482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 13:34:30.918577  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-123357 minikube.k8s.io/updated_at=2025_11_02T13_34_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=bridge-123357 minikube.k8s.io/primary=true
	I1102 13:34:30.918595  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:30.930452  278482 ops.go:34] apiserver oom_adj: -16
	I1102 13:34:31.007971  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:31.508637  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:32.008631  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:32.508784  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:33.008272  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:33.508613  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:34.008847  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:34.508839  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:35.008127  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:30.817155  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:30.817192  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:30.817198  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running
	I1102 13:34:30.817203  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:30.817206  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:30.817210  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:30.817214  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:30.817216  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Running
	I1102 13:34:30.817230  272706 retry.go:31] will retry after 1.16826571s: missing components: kube-dns
	I1102 13:34:31.990591  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:31.990629  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:31.990638  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running
	I1102 13:34:31.990646  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:31.990652  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:31.990657  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:31.990662  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:31.990667  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Running
	I1102 13:34:31.990684  272706 retry.go:31] will retry after 1.002969656s: missing components: kube-dns
	I1102 13:34:32.997371  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:32.997403  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:32.997408  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running
	I1102 13:34:32.997414  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:32.997417  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:32.997421  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:32.997424  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:32.997427  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Running
	I1102 13:34:32.997439  272706 retry.go:31] will retry after 1.230396802s: missing components: kube-dns
	I1102 13:34:34.232058  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:34.232106  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:34.232115  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running
	I1102 13:34:34.232122  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:34.232127  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:34.232134  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:34.232139  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:34.232144  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Running
	I1102 13:34:34.232161  272706 retry.go:31] will retry after 2.098559142s: missing components: kube-dns
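The retry.go loop above is waiting on a single condition: a Ready CoreDNS pod. minikube polls the pod list in Go with increasing backoff; an equivalent one-shot wait with kubectl (run inside the node, using the kubeconfig path seen throughout the log and the standard k8s-app=kube-dns label) would be:

```bash
# Block until the CoreDNS pod behind "kube-dns" reports Ready, which is
# exactly the component the retry loop keeps listing as missing.
kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
```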
	I1102 13:34:35.509032  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:36.008674  278482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:36.087346  278482 kubeadm.go:1114] duration metric: took 5.168839816s to wait for elevateKubeSystemPrivileges
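The repeating `kubectl get sa default` calls above are the elevateKubeSystemPrivileges wait: the "default" ServiceAccount only appears once the controller-manager's token controller is up, so minikube probes for it roughly every 500ms. As a shell loop (paths from the log):

```bash
# Poll for the "default" ServiceAccount until the controller-manager has
# created it; each probe mirrors one Run line in the log above.
KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done
```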
	I1102 13:34:36.087387  278482 kubeadm.go:403] duration metric: took 19.739455159s to StartCluster
	I1102 13:34:36.087409  278482 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:36.087488  278482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:34:36.089096  278482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:36.089360  278482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 13:34:36.089370  278482 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:34:36.089456  278482 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:34:36.089573  278482 addons.go:70] Setting storage-provisioner=true in profile "bridge-123357"
	I1102 13:34:36.089588  278482 addons.go:70] Setting default-storageclass=true in profile "bridge-123357"
	I1102 13:34:36.089598  278482 addons.go:239] Setting addon storage-provisioner=true in "bridge-123357"
	I1102 13:34:36.089614  278482 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-123357"
	I1102 13:34:36.089633  278482 host.go:66] Checking if "bridge-123357" exists ...
	I1102 13:34:36.089588  278482 config.go:182] Loaded profile config "bridge-123357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:34:36.089976  278482 cli_runner.go:164] Run: docker container inspect bridge-123357 --format={{.State.Status}}
	I1102 13:34:36.090139  278482 cli_runner.go:164] Run: docker container inspect bridge-123357 --format={{.State.Status}}
	I1102 13:34:36.091755  278482 out.go:179] * Verifying Kubernetes components...
	I1102 13:34:36.093026  278482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:34:36.112970  278482 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:34:36.113763  278482 addons.go:239] Setting addon default-storageclass=true in "bridge-123357"
	I1102 13:34:36.113811  278482 host.go:66] Checking if "bridge-123357" exists ...
	I1102 13:34:36.114146  278482 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:34:36.114165  278482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:34:36.114219  278482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-123357
	I1102 13:34:36.114299  278482 cli_runner.go:164] Run: docker container inspect bridge-123357 --format={{.State.Status}}
	I1102 13:34:36.142192  278482 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:34:36.142220  278482 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:34:36.142282  278482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-123357
	I1102 13:34:36.145229  278482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/bridge-123357/id_rsa Username:docker}
	I1102 13:34:36.168433  278482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/bridge-123357/id_rsa Username:docker}
	I1102 13:34:36.190160  278482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 13:34:36.254057  278482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:34:36.270232  278482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:34:36.293069  278482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:34:36.426469  278482 start.go:1013] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
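The sed pipeline a few lines up splices a `hosts { ... fallthrough }` stanza into the Corefile ahead of its `forward . /etc/resolv.conf` line, so in-cluster lookups of host.minikube.internal resolve to the host gateway; that is what the "host record injected" message confirms. The injected stanza can be read back from the live ConfigMap:

```bash
# Print the live Corefile and show the injected host record (in-node
# kubectl and kubeconfig paths as used throughout the log).
sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
```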
	I1102 13:34:36.427662  278482 node_ready.go:35] waiting up to 15m0s for node "bridge-123357" to be "Ready" ...
	I1102 13:34:36.437688  278482 node_ready.go:49] node "bridge-123357" is "Ready"
	I1102 13:34:36.437718  278482 node_ready.go:38] duration metric: took 10.026595ms for node "bridge-123357" to be "Ready" ...
	I1102 13:34:36.437732  278482 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:34:36.437781  278482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:34:36.639064  278482 api_server.go:72] duration metric: took 549.662603ms to wait for apiserver process to appear ...
	I1102 13:34:36.639093  278482 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:34:36.639116  278482 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1102 13:34:36.644833  278482 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
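The healthz probe above is a plain HTTPS GET retried until it returns 200; a curl equivalent (with -k because the endpoint serves a cert signed by the self-signed minikubeCA):

```bash
# Poll the apiserver health endpoint until it answers 200/"ok";
# -s silences progress, -f makes HTTP errors exit non-zero.
until curl -ksf https://192.168.103.2:8443/healthz >/dev/null; do
  sleep 1
done
```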
	I1102 13:34:36.645704  278482 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 13:34:31.966956  231852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 13:34:31.967425  231852 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1102 13:34:31.967474  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1102 13:34:31.967526  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1102 13:34:31.998185  231852 cri.go:89] found id: "4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	I1102 13:34:31.998217  231852 cri.go:89] found id: ""
	I1102 13:34:31.998225  231852 logs.go:282] 1 containers: [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be]
	I1102 13:34:31.998285  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:32.002532  231852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1102 13:34:32.002607  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1102 13:34:32.031175  231852 cri.go:89] found id: ""
	I1102 13:34:32.031206  231852 logs.go:282] 0 containers: []
	W1102 13:34:32.031216  231852 logs.go:284] No container was found matching "etcd"
	I1102 13:34:32.031225  231852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1102 13:34:32.031286  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1102 13:34:32.064526  231852 cri.go:89] found id: ""
	I1102 13:34:32.064553  231852 logs.go:282] 0 containers: []
	W1102 13:34:32.064600  231852 logs.go:284] No container was found matching "coredns"
	I1102 13:34:32.064608  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1102 13:34:32.064666  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1102 13:34:32.095470  231852 cri.go:89] found id: "548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:32.095493  231852 cri.go:89] found id: ""
	I1102 13:34:32.095503  231852 logs.go:282] 1 containers: [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a]
	I1102 13:34:32.095587  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:32.099801  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1102 13:34:32.099851  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1102 13:34:32.126225  231852 cri.go:89] found id: ""
	I1102 13:34:32.126249  231852 logs.go:282] 0 containers: []
	W1102 13:34:32.126259  231852 logs.go:284] No container was found matching "kube-proxy"
	I1102 13:34:32.126267  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1102 13:34:32.126318  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1102 13:34:32.153408  231852 cri.go:89] found id: "0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:32.153435  231852 cri.go:89] found id: ""
	I1102 13:34:32.153446  231852 logs.go:282] 1 containers: [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5]
	I1102 13:34:32.153505  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:32.157655  231852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1102 13:34:32.157722  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1102 13:34:32.184601  231852 cri.go:89] found id: ""
	I1102 13:34:32.184626  231852 logs.go:282] 0 containers: []
	W1102 13:34:32.184636  231852 logs.go:284] No container was found matching "kindnet"
	I1102 13:34:32.184644  231852 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1102 13:34:32.184700  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1102 13:34:32.212335  231852 cri.go:89] found id: ""
	I1102 13:34:32.212365  231852 logs.go:282] 0 containers: []
	W1102 13:34:32.212376  231852 logs.go:284] No container was found matching "storage-provisioner"
	I1102 13:34:32.212388  231852 logs.go:123] Gathering logs for kube-apiserver [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be] ...
	I1102 13:34:32.212402  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	I1102 13:34:32.249709  231852 logs.go:123] Gathering logs for kube-scheduler [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a] ...
	I1102 13:34:32.249749  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:32.313534  231852 logs.go:123] Gathering logs for kube-controller-manager [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5] ...
	I1102 13:34:32.313583  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:32.341296  231852 logs.go:123] Gathering logs for CRI-O ...
	I1102 13:34:32.341332  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1102 13:34:32.396074  231852 logs.go:123] Gathering logs for container status ...
	I1102 13:34:32.396106  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1102 13:34:32.428591  231852 logs.go:123] Gathering logs for kubelet ...
	I1102 13:34:32.428619  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1102 13:34:32.530854  231852 logs.go:123] Gathering logs for dmesg ...
	I1102 13:34:32.530891  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1102 13:34:32.549247  231852 logs.go:123] Gathering logs for describe nodes ...
	I1102 13:34:32.549277  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1102 13:34:38.620028  285056 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1102 13:34:38.620116  285056 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 13:34:38.620219  285056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 13:34:38.620314  285056 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1102 13:34:38.620371  285056 kubeadm.go:319] OS: Linux
	I1102 13:34:38.620440  285056 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 13:34:38.620511  285056 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 13:34:38.620615  285056 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 13:34:38.620692  285056 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 13:34:38.620771  285056 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 13:34:38.620841  285056 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 13:34:38.620917  285056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 13:34:38.620988  285056 kubeadm.go:319] CGROUPS_IO: enabled
	I1102 13:34:38.621071  285056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 13:34:38.621230  285056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 13:34:38.621315  285056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'

	I1102 13:34:38.621397  285056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 13:34:38.623679  285056 out.go:252]   - Generating certificates and keys ...
	I1102 13:34:38.623762  285056 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 13:34:38.623848  285056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 13:34:38.623911  285056 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 13:34:38.623972  285056 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 13:34:38.624036  285056 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 13:34:38.624082  285056 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 13:34:38.624128  285056 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 13:34:38.624306  285056 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-054159] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1102 13:34:38.624367  285056 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 13:34:38.624486  285056 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-054159] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1102 13:34:38.624596  285056 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 13:34:38.624674  285056 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 13:34:38.624714  285056 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 13:34:38.624759  285056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 13:34:38.624801  285056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 13:34:38.624850  285056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 13:34:38.624907  285056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 13:34:38.624968  285056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 13:34:38.625087  285056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 13:34:38.625186  285056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 13:34:38.626491  285056 out.go:252]   - Booting up control plane ...
	I1102 13:34:38.626601  285056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 13:34:38.626692  285056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 13:34:38.626786  285056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 13:34:38.626920  285056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 13:34:38.627014  285056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 13:34:38.627081  285056 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 13:34:38.627251  285056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1102 13:34:38.627368  285056 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.002301 seconds
	I1102 13:34:38.627463  285056 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 13:34:38.627635  285056 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 13:34:38.627696  285056 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 13:34:38.627863  285056 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-054159 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 13:34:38.627912  285056 kubeadm.go:319] [bootstrap-token] Using token: 0vsh8m.a5xhk4tv6n466edr
	I1102 13:34:38.629242  285056 out.go:252]   - Configuring RBAC rules ...
	I1102 13:34:38.629348  285056 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 13:34:38.629439  285056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 13:34:38.629627  285056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 13:34:38.629804  285056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 13:34:38.629978  285056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 13:34:38.630091  285056 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 13:34:38.630203  285056 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 13:34:38.630241  285056 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 13:34:38.630281  285056 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 13:34:38.630287  285056 kubeadm.go:319] 
	I1102 13:34:38.630339  285056 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 13:34:38.630345  285056 kubeadm.go:319] 
	I1102 13:34:38.630408  285056 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 13:34:38.630414  285056 kubeadm.go:319] 
	I1102 13:34:38.630455  285056 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 13:34:38.630531  285056 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 13:34:38.630596  285056 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 13:34:38.630605  285056 kubeadm.go:319] 
	I1102 13:34:38.630674  285056 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 13:34:38.630691  285056 kubeadm.go:319] 
	I1102 13:34:38.630763  285056 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 13:34:38.630776  285056 kubeadm.go:319] 
	I1102 13:34:38.630854  285056 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 13:34:38.630968  285056 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 13:34:38.631071  285056 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 13:34:38.631082  285056 kubeadm.go:319] 
	I1102 13:34:38.631187  285056 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 13:34:38.631279  285056 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 13:34:38.631290  285056 kubeadm.go:319] 
	I1102 13:34:38.631389  285056 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0vsh8m.a5xhk4tv6n466edr \
	I1102 13:34:38.631560  285056 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 \
	I1102 13:34:38.631703  285056 kubeadm.go:319] 	--control-plane 
	I1102 13:34:38.631720  285056 kubeadm.go:319] 
	I1102 13:34:38.631847  285056 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 13:34:38.631862  285056 kubeadm.go:319] 
	I1102 13:34:38.631982  285056 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0vsh8m.a5xhk4tv6n466edr \
	I1102 13:34:38.632132  285056 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 
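The `--discovery-token-ca-cert-hash` printed above pins the cluster CA for joining nodes. A joining machine can recompute the hash from the control plane's CA certificate and compare it against this value; a minimal sketch per the upstream kubeadm docs (not part of this run), assuming shell access to the control-plane node:

    # Recompute the sha256 hash of the cluster CA public key
    # (should match the --discovery-token-ca-cert-hash value above)
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'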
	I1102 13:34:38.632149  285056 cni.go:84] Creating CNI manager for ""
	I1102 13:34:38.632157  285056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:34:38.633707  285056 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 13:34:36.645812  278482 api_server.go:141] control plane version: v1.34.1
	I1102 13:34:36.645847  278482 api_server.go:131] duration metric: took 6.745362ms to wait for apiserver health ...
	I1102 13:34:36.645862  278482 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:34:36.647152  278482 addons.go:515] duration metric: took 557.696444ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 13:34:36.649484  278482 system_pods.go:59] 8 kube-system pods found
	I1102 13:34:36.649528  278482 system_pods.go:61] "coredns-66bc5c9577-64bzq" [ead7baf2-5b2f-4b43-9fea-a78be87bd96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:36.649540  278482 system_pods.go:61] "coredns-66bc5c9577-kfmcz" [c53bd254-a165-4fb9-b820-d888054e89bb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:36.649551  278482 system_pods.go:61] "etcd-bridge-123357" [a29e3101-0526-4ded-8e6b-c02e979ead14] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:34:36.649578  278482 system_pods.go:61] "kube-apiserver-bridge-123357" [2acb868b-a12d-427a-9958-e5b6c47675a2] Running
	I1102 13:34:36.649588  278482 system_pods.go:61] "kube-controller-manager-bridge-123357" [d64af27a-4806-4372-ae44-4ea9588bcc30] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:34:36.649593  278482 system_pods.go:61] "kube-proxy-hhcgf" [1bd12ea2-dab8-43c2-b6be-23401efb2122] Running
	I1102 13:34:36.649602  278482 system_pods.go:61] "kube-scheduler-bridge-123357" [a5e91b26-feee-478a-bbeb-6f050f4fd173] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:34:36.649616  278482 system_pods.go:61] "storage-provisioner" [0c87e34a-6996-4a83-b541-24e6166e2b81] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:34:36.649627  278482 system_pods.go:74] duration metric: took 3.757643ms to wait for pod list to return data ...
	I1102 13:34:36.649637  278482 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:34:36.651786  278482 default_sa.go:45] found service account: "default"
	I1102 13:34:36.651802  278482 default_sa.go:55] duration metric: took 2.159437ms for default service account to be created ...
	I1102 13:34:36.651808  278482 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:34:36.654526  278482 system_pods.go:86] 8 kube-system pods found
	I1102 13:34:36.654551  278482 system_pods.go:89] "coredns-66bc5c9577-64bzq" [ead7baf2-5b2f-4b43-9fea-a78be87bd96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:36.654557  278482 system_pods.go:89] "coredns-66bc5c9577-kfmcz" [c53bd254-a165-4fb9-b820-d888054e89bb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:36.654607  278482 system_pods.go:89] "etcd-bridge-123357" [a29e3101-0526-4ded-8e6b-c02e979ead14] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:34:36.654620  278482 system_pods.go:89] "kube-apiserver-bridge-123357" [2acb868b-a12d-427a-9958-e5b6c47675a2] Running
	I1102 13:34:36.654633  278482 system_pods.go:89] "kube-controller-manager-bridge-123357" [d64af27a-4806-4372-ae44-4ea9588bcc30] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:34:36.654638  278482 system_pods.go:89] "kube-proxy-hhcgf" [1bd12ea2-dab8-43c2-b6be-23401efb2122] Running
	I1102 13:34:36.654649  278482 system_pods.go:89] "kube-scheduler-bridge-123357" [a5e91b26-feee-478a-bbeb-6f050f4fd173] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:34:36.654659  278482 system_pods.go:89] "storage-provisioner" [0c87e34a-6996-4a83-b541-24e6166e2b81] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:34:36.654680  278482 retry.go:31] will retry after 202.161213ms: missing components: kube-dns
	I1102 13:34:36.860554  278482 system_pods.go:86] 8 kube-system pods found
	I1102 13:34:36.860605  278482 system_pods.go:89] "coredns-66bc5c9577-64bzq" [ead7baf2-5b2f-4b43-9fea-a78be87bd96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:36.860621  278482 system_pods.go:89] "coredns-66bc5c9577-kfmcz" [c53bd254-a165-4fb9-b820-d888054e89bb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:36.860631  278482 system_pods.go:89] "etcd-bridge-123357" [a29e3101-0526-4ded-8e6b-c02e979ead14] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:34:36.860639  278482 system_pods.go:89] "kube-apiserver-bridge-123357" [2acb868b-a12d-427a-9958-e5b6c47675a2] Running
	I1102 13:34:36.860647  278482 system_pods.go:89] "kube-controller-manager-bridge-123357" [d64af27a-4806-4372-ae44-4ea9588bcc30] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:34:36.860656  278482 system_pods.go:89] "kube-proxy-hhcgf" [1bd12ea2-dab8-43c2-b6be-23401efb2122] Running
	I1102 13:34:36.860664  278482 system_pods.go:89] "kube-scheduler-bridge-123357" [a5e91b26-feee-478a-bbeb-6f050f4fd173] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:34:36.860674  278482 system_pods.go:89] "storage-provisioner" [0c87e34a-6996-4a83-b541-24e6166e2b81] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:34:36.860690  278482 retry.go:31] will retry after 340.116339ms: missing components: kube-dns
	I1102 13:34:36.930900  278482 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-123357" context rescaled to 1 replicas
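The rescale above trims CoreDNS from its default two replicas to one for the single-node cluster. A hedged manual equivalent, assuming the `bridge-123357` context exists in the local kubeconfig:

    kubectl --context bridge-123357 -n kube-system scale deployment coredns --replicas=1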
	I1102 13:34:37.204878  278482 system_pods.go:86] 8 kube-system pods found
	I1102 13:34:37.204911  278482 system_pods.go:89] "coredns-66bc5c9577-64bzq" [ead7baf2-5b2f-4b43-9fea-a78be87bd96c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:37.204923  278482 system_pods.go:89] "coredns-66bc5c9577-kfmcz" [c53bd254-a165-4fb9-b820-d888054e89bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:37.204930  278482 system_pods.go:89] "etcd-bridge-123357" [a29e3101-0526-4ded-8e6b-c02e979ead14] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:34:37.204934  278482 system_pods.go:89] "kube-apiserver-bridge-123357" [2acb868b-a12d-427a-9958-e5b6c47675a2] Running
	I1102 13:34:37.204941  278482 system_pods.go:89] "kube-controller-manager-bridge-123357" [d64af27a-4806-4372-ae44-4ea9588bcc30] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:34:37.204944  278482 system_pods.go:89] "kube-proxy-hhcgf" [1bd12ea2-dab8-43c2-b6be-23401efb2122] Running
	I1102 13:34:37.204949  278482 system_pods.go:89] "kube-scheduler-bridge-123357" [a5e91b26-feee-478a-bbeb-6f050f4fd173] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:34:37.204952  278482 system_pods.go:89] "storage-provisioner" [0c87e34a-6996-4a83-b541-24e6166e2b81] Running
	I1102 13:34:37.204959  278482 system_pods.go:126] duration metric: took 553.146681ms to wait for k8s-apps to be running ...
	I1102 13:34:37.204982  278482 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:34:37.205026  278482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:34:37.219135  278482 system_svc.go:56] duration metric: took 14.142658ms WaitForService to wait for kubelet
	I1102 13:34:37.219164  278482 kubeadm.go:587] duration metric: took 1.129769039s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:34:37.219187  278482 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:34:37.222107  278482 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:34:37.222139  278482 node_conditions.go:123] node cpu capacity is 8
	I1102 13:34:37.222153  278482 node_conditions.go:105] duration metric: took 2.959865ms to run NodePressure ...
	I1102 13:34:37.222163  278482 start.go:242] waiting for startup goroutines ...
	I1102 13:34:37.222169  278482 start.go:247] waiting for cluster config update ...
	I1102 13:34:37.222179  278482 start.go:256] writing updated cluster config ...
	I1102 13:34:37.222458  278482 ssh_runner.go:195] Run: rm -f paused
	I1102 13:34:37.226884  278482 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:34:37.230996  278482 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-64bzq" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 13:34:39.237296  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
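The "not Ready" warnings come from polling the pod's `Ready` condition until it flips to True or the pod goes away. The same condition can be read by hand with a jsonpath query (a sketch, not the test's own code):

    kubectl --context bridge-123357 -n kube-system get pod coredns-66bc5c9577-64bzq \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'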
	I1102 13:34:36.335291  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:36.335329  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:36.335338  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running
	I1102 13:34:36.335344  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:36.335348  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:36.335352  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:36.335355  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:36.335358  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Running
	I1102 13:34:36.335376  272706 retry.go:31] will retry after 2.292759669s: missing components: kube-dns
	I1102 13:34:38.633820  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:38.633856  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:38.633864  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running
	I1102 13:34:38.633875  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:38.633881  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:38.633887  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:38.633896  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:38.633901  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Running
	I1102 13:34:38.633917  272706 retry.go:31] will retry after 3.615593973s: missing components: kube-dns
	I1102 13:34:38.634872  285056 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 13:34:38.639214  285056 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1102 13:34:38.639232  285056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 13:34:38.652894  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
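The CNI step stages the kindnet manifest at /var/tmp/minikube/cni.yaml and applies it with the version-pinned kubectl, as logged above. One way to confirm the daemonset came up afterwards (a sketch; the `app=kindnet` label is an assumption about the manifest, which is not shown in this log):

    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet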
	I1102 13:34:39.290922  285056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 13:34:39.291012  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:39.291032  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-054159 minikube.k8s.io/updated_at=2025_11_02T13_34_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=old-k8s-version-054159 minikube.k8s.io/primary=true
	I1102 13:34:39.361633  285056 ops.go:34] apiserver oom_adj: -16
	I1102 13:34:39.361707  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:39.862072  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:40.362219  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:40.862168  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:41.362470  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:41.862465  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:42.362668  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:42.861965  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
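The repeated `kubectl get sa default` runs are a readiness poll: kubeadm creates the `default` service account asynchronously, so minikube retries on roughly a 500ms cadence (visible in the timestamps) until the call succeeds. The same loop as standalone shell, for illustration only:

    # Poll until the default service account exists (sketch of the retry above)
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done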
	W1102 13:34:41.736616  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	W1102 13:34:43.736838  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	I1102 13:34:42.253129  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:42.253160  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:34:42.253166  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running
	I1102 13:34:42.253172  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:42.253176  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:42.253179  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:42.253182  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:42.253185  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Running
	I1102 13:34:42.253198  272706 retry.go:31] will retry after 3.240304697s: missing components: kube-dns
	I1102 13:34:42.617919  231852 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.068619998s)
	W1102 13:34:42.617965  231852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
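A TLS handshake timeout like this usually means the apiserver is accepting TCP on 8443 but is too loaded (or restarting) to complete the handshake; the run re-probes /healthz next. A manual probe of the same endpoint (sketch; -k skips CA verification, which is acceptable for a health check):

    curl -k --max-time 5 https://192.168.85.2:8443/healthz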
	I1102 13:34:45.118229  231852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 13:34:45.497426  272706 system_pods.go:86] 7 kube-system pods found
	I1102 13:34:45.497454  272706 system_pods.go:89] "coredns-66bc5c9577-b7rf8" [4ab036ef-c808-4189-a177-39eacc92b486] Running
	I1102 13:34:45.497460  272706 system_pods.go:89] "etcd-flannel-123357" [b5ef9f52-0615-40d8-9212-bfb7addacec0] Running
	I1102 13:34:45.497464  272706 system_pods.go:89] "kube-apiserver-flannel-123357" [2e6e1731-daa6-4d22-9942-4d538ae51ded] Running
	I1102 13:34:45.497470  272706 system_pods.go:89] "kube-controller-manager-flannel-123357" [6006c038-149d-4a92-b2e6-12d19b2e3e68] Running
	I1102 13:34:45.497473  272706 system_pods.go:89] "kube-proxy-qkkws" [b7c09d83-80d7-4cdf-8777-860af56b59f4] Running
	I1102 13:34:45.497476  272706 system_pods.go:89] "kube-scheduler-flannel-123357" [8fbbdd4d-f4ed-449f-8222-c0207014100f] Running
	I1102 13:34:45.497479  272706 system_pods.go:89] "storage-provisioner" [53fe955f-4acf-4e1f-85e2-251ebe8e5a39] Running
	I1102 13:34:45.497486  272706 system_pods.go:126] duration metric: took 17.607508701s to wait for k8s-apps to be running ...
	I1102 13:34:45.497493  272706 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:34:45.497535  272706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:34:45.511734  272706 system_svc.go:56] duration metric: took 14.233613ms WaitForService to wait for kubelet
	I1102 13:34:45.511766  272706 kubeadm.go:587] duration metric: took 21.440085006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:34:45.511787  272706 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:34:45.514838  272706 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:34:45.514869  272706 node_conditions.go:123] node cpu capacity is 8
	I1102 13:34:45.514883  272706 node_conditions.go:105] duration metric: took 3.090973ms to run NodePressure ...
	I1102 13:34:45.514893  272706 start.go:242] waiting for startup goroutines ...
	I1102 13:34:45.514900  272706 start.go:247] waiting for cluster config update ...
	I1102 13:34:45.514909  272706 start.go:256] writing updated cluster config ...
	I1102 13:34:45.515167  272706 ssh_runner.go:195] Run: rm -f paused
	I1102 13:34:45.519702  272706 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:34:45.522893  272706 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b7rf8" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:45.527088  272706 pod_ready.go:94] pod "coredns-66bc5c9577-b7rf8" is "Ready"
	I1102 13:34:45.527107  272706 pod_ready.go:86] duration metric: took 4.192738ms for pod "coredns-66bc5c9577-b7rf8" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:45.529045  272706 pod_ready.go:83] waiting for pod "etcd-flannel-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:45.532898  272706 pod_ready.go:94] pod "etcd-flannel-123357" is "Ready"
	I1102 13:34:45.532918  272706 pod_ready.go:86] duration metric: took 3.838376ms for pod "etcd-flannel-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:45.534815  272706 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:45.538538  272706 pod_ready.go:94] pod "kube-apiserver-flannel-123357" is "Ready"
	I1102 13:34:45.538559  272706 pod_ready.go:86] duration metric: took 3.726776ms for pod "kube-apiserver-flannel-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:45.540349  272706 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:45.923836  272706 pod_ready.go:94] pod "kube-controller-manager-flannel-123357" is "Ready"
	I1102 13:34:45.923871  272706 pod_ready.go:86] duration metric: took 383.505064ms for pod "kube-controller-manager-flannel-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:46.124315  272706 pod_ready.go:83] waiting for pod "kube-proxy-qkkws" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:46.523034  272706 pod_ready.go:94] pod "kube-proxy-qkkws" is "Ready"
	I1102 13:34:46.523059  272706 pod_ready.go:86] duration metric: took 398.719251ms for pod "kube-proxy-qkkws" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:46.724019  272706 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:47.124127  272706 pod_ready.go:94] pod "kube-scheduler-flannel-123357" is "Ready"
	I1102 13:34:47.124155  272706 pod_ready.go:86] duration metric: took 400.110962ms for pod "kube-scheduler-flannel-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:34:47.124167  272706 pod_ready.go:40] duration metric: took 1.604439257s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:34:47.169674  272706 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:34:47.171618  272706 out.go:179] * Done! kubectl is now configured to use "flannel-123357" cluster and "default" namespace by default
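With the flannel profile finished, its context is the kubeconfig default, as the message above says. To target it explicitly later (assuming the default kubeconfig path):

    kubectl config use-context flannel-123357
    kubectl get nodes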
	I1102 13:34:43.362353  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:43.861904  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:44.362607  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:44.862148  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:45.362547  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:45.861804  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:46.362550  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:46.862757  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:47.361885  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:47.862395  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1102 13:34:46.236000  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	W1102 13:34:48.236780  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	W1102 13:34:50.237283  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	I1102 13:34:50.120050  231852 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1102 13:34:50.120131  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1102 13:34:50.120213  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1102 13:34:50.149386  231852 cri.go:89] found id: "cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:34:50.149409  231852 cri.go:89] found id: "4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	I1102 13:34:50.149415  231852 cri.go:89] found id: ""
	I1102 13:34:50.149424  231852 logs.go:282] 2 containers: [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be]
	I1102 13:34:50.149479  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:50.154256  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:50.158152  231852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1102 13:34:50.158224  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1102 13:34:50.184352  231852 cri.go:89] found id: ""
	I1102 13:34:50.184390  231852 logs.go:282] 0 containers: []
	W1102 13:34:50.184402  231852 logs.go:284] No container was found matching "etcd"
	I1102 13:34:50.184413  231852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1102 13:34:50.184468  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1102 13:34:50.212941  231852 cri.go:89] found id: ""
	I1102 13:34:50.212973  231852 logs.go:282] 0 containers: []
	W1102 13:34:50.212982  231852 logs.go:284] No container was found matching "coredns"
	I1102 13:34:50.212988  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1102 13:34:50.213057  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1102 13:34:50.243313  231852 cri.go:89] found id: "548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:50.243344  231852 cri.go:89] found id: ""
	I1102 13:34:50.243355  231852 logs.go:282] 1 containers: [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a]
	I1102 13:34:50.243404  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:50.248007  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1102 13:34:50.248059  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1102 13:34:50.275026  231852 cri.go:89] found id: ""
	I1102 13:34:50.275048  231852 logs.go:282] 0 containers: []
	W1102 13:34:50.275055  231852 logs.go:284] No container was found matching "kube-proxy"
	I1102 13:34:50.275061  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1102 13:34:50.275106  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1102 13:34:50.303442  231852 cri.go:89] found id: "6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	I1102 13:34:50.303467  231852 cri.go:89] found id: "0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:50.303472  231852 cri.go:89] found id: ""
	I1102 13:34:50.303481  231852 logs.go:282] 2 containers: [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf 0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5]
	I1102 13:34:50.303546  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:50.307586  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:50.311230  231852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1102 13:34:50.311282  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1102 13:34:50.337683  231852 cri.go:89] found id: ""
	I1102 13:34:50.337710  231852 logs.go:282] 0 containers: []
	W1102 13:34:50.337721  231852 logs.go:284] No container was found matching "kindnet"
	I1102 13:34:50.337728  231852 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1102 13:34:50.337796  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1102 13:34:50.365933  231852 cri.go:89] found id: ""
	I1102 13:34:50.365960  231852 logs.go:282] 0 containers: []
	W1102 13:34:50.365970  231852 logs.go:284] No container was found matching "storage-provisioner"
	I1102 13:34:50.365985  231852 logs.go:123] Gathering logs for kube-scheduler [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a] ...
	I1102 13:34:50.365996  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:50.421682  231852 logs.go:123] Gathering logs for kube-controller-manager [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf] ...
	I1102 13:34:50.421728  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	I1102 13:34:50.449512  231852 logs.go:123] Gathering logs for kube-controller-manager [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5] ...
	I1102 13:34:50.449536  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:50.476246  231852 logs.go:123] Gathering logs for dmesg ...
	I1102 13:34:50.476274  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1102 13:34:50.491876  231852 logs.go:123] Gathering logs for describe nodes ...
	I1102 13:34:50.491909  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
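The diagnostics pass above enumerates containers per control-plane component with crictl, tails their logs, and collects the crio and kubelet journals. Condensed to its shell essentials (commands lifted from the log itself; `head -n1` is added here to pick one ID when several are listed):

    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo /usr/local/bin/crictl logs --tail 400 "$id"
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400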
	I1102 13:34:48.362270  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:48.862407  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:49.362069  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:49.862620  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:50.362015  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:50.862774  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:51.362238  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:51.862762  285056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:34:51.932448  285056 kubeadm.go:1114] duration metric: took 12.641504571s to wait for elevateKubeSystemPrivileges
	I1102 13:34:51.932489  285056 kubeadm.go:403] duration metric: took 21.695750516s to StartCluster
	I1102 13:34:51.932512  285056 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:51.932605  285056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:34:51.933877  285056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:34:51.934092  285056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 13:34:51.934142  285056 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:34:51.934191  285056 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:34:51.934276  285056 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-054159"
	I1102 13:34:51.934285  285056 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-054159"
	I1102 13:34:51.934296  285056 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-054159"
	I1102 13:34:51.934344  285056 host.go:66] Checking if "old-k8s-version-054159" exists ...
	I1102 13:34:51.934356  285056 config.go:182] Loaded profile config "old-k8s-version-054159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1102 13:34:51.934307  285056 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-054159"
	I1102 13:34:51.934707  285056 cli_runner.go:164] Run: docker container inspect old-k8s-version-054159 --format={{.State.Status}}
	I1102 13:34:51.934830  285056 cli_runner.go:164] Run: docker container inspect old-k8s-version-054159 --format={{.State.Status}}
	I1102 13:34:51.935681  285056 out.go:179] * Verifying Kubernetes components...
	I1102 13:34:51.937110  285056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:34:51.958498  285056 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:34:51.959696  285056 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-054159"
	I1102 13:34:51.959744  285056 host.go:66] Checking if "old-k8s-version-054159" exists ...
	I1102 13:34:51.960035  285056 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:34:51.960063  285056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:34:51.960121  285056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054159
	I1102 13:34:51.960312  285056 cli_runner.go:164] Run: docker container inspect old-k8s-version-054159 --format={{.State.Status}}
	I1102 13:34:51.990601  285056 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:34:51.990680  285056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:34:51.990781  285056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054159
	I1102 13:34:51.990907  285056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/old-k8s-version-054159/id_rsa Username:docker}
	I1102 13:34:52.016867  285056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/old-k8s-version-054159/id_rsa Username:docker}
	I1102 13:34:52.040986  285056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 13:34:52.082239  285056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:34:52.114112  285056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:34:52.138395  285056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:34:52.282097  285056 start.go:1013] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1102 13:34:52.283576  285056 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-054159" to be "Ready" ...
	I1102 13:34:52.499885  285056 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 13:34:52.501094  285056 addons.go:515] duration metric: took 566.90056ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 13:34:52.786885  285056 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-054159" context rescaled to 1 replicas
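The host record injected at 13:34:52.282 comes from the sed pipeline a few lines earlier, which splices a `hosts` stanza into the CoreDNS Corefile ahead of the `forward` plugin. The fragment it inserts, reconstructed from that command:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }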
	W1102 13:34:52.238069  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	W1102 13:34:54.737085  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	I1102 13:34:53.746894  231852 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.254961176s)
	W1102 13:34:53.746946  231852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:40948->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:40948->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1102 13:34:53.746955  231852 logs.go:123] Gathering logs for kube-apiserver [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa] ...
	I1102 13:34:53.746971  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:34:53.780452  231852 logs.go:123] Gathering logs for kube-apiserver [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be] ...
	I1102 13:34:53.780480  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	W1102 13:34:53.807683  231852 logs.go:130] failed kube-apiserver [4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be": Process exited with status 1
	stdout:
	
	stderr:
	E1102 13:34:53.805688    5566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be\": container with ID starting with 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be not found: ID does not exist" containerID="4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	time="2025-11-02T13:34:53Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be\": container with ID starting with 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be not found: ID does not exist"
	 output: 
	** stderr ** 
	E1102 13:34:53.805688    5566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be\": container with ID starting with 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be not found: ID does not exist" containerID="4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be"
	time="2025-11-02T13:34:53Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be\": container with ID starting with 4785e723f66612b4113f433b6e51f0a5f64fcdd4c61cec8cd08c1a13d88379be not found: ID does not exist"
	
	** /stderr **
	I1102 13:34:53.807707  231852 logs.go:123] Gathering logs for CRI-O ...
	I1102 13:34:53.807719  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1102 13:34:53.872455  231852 logs.go:123] Gathering logs for container status ...
	I1102 13:34:53.872489  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1102 13:34:53.903005  231852 logs.go:123] Gathering logs for kubelet ...
	I1102 13:34:53.903032  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1102 13:34:56.505299  231852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 13:34:56.505717  231852 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1102 13:34:56.505776  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1102 13:34:56.505840  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1102 13:34:56.533337  231852 cri.go:89] found id: "cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:34:56.533358  231852 cri.go:89] found id: ""
	I1102 13:34:56.533365  231852 logs.go:282] 1 containers: [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa]
	I1102 13:34:56.533427  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:56.537429  231852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1102 13:34:56.537490  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1102 13:34:56.563754  231852 cri.go:89] found id: ""
	I1102 13:34:56.563782  231852 logs.go:282] 0 containers: []
	W1102 13:34:56.563793  231852 logs.go:284] No container was found matching "etcd"
	I1102 13:34:56.563802  231852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1102 13:34:56.563857  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1102 13:34:56.590835  231852 cri.go:89] found id: ""
	I1102 13:34:56.590858  231852 logs.go:282] 0 containers: []
	W1102 13:34:56.590865  231852 logs.go:284] No container was found matching "coredns"
	I1102 13:34:56.590871  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1102 13:34:56.590917  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1102 13:34:56.617311  231852 cri.go:89] found id: "548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:56.617331  231852 cri.go:89] found id: ""
	I1102 13:34:56.617338  231852 logs.go:282] 1 containers: [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a]
	I1102 13:34:56.617390  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:56.621375  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1102 13:34:56.621436  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1102 13:34:56.647292  231852 cri.go:89] found id: ""
	I1102 13:34:56.647319  231852 logs.go:282] 0 containers: []
	W1102 13:34:56.647330  231852 logs.go:284] No container was found matching "kube-proxy"
	I1102 13:34:56.647340  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1102 13:34:56.647401  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1102 13:34:56.673466  231852 cri.go:89] found id: "6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	I1102 13:34:56.673487  231852 cri.go:89] found id: "0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:56.673491  231852 cri.go:89] found id: ""
	I1102 13:34:56.673498  231852 logs.go:282] 2 containers: [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf 0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5]
	I1102 13:34:56.673543  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:56.677767  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:56.682034  231852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1102 13:34:56.682107  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1102 13:34:56.708839  231852 cri.go:89] found id: ""
	I1102 13:34:56.708863  231852 logs.go:282] 0 containers: []
	W1102 13:34:56.708870  231852 logs.go:284] No container was found matching "kindnet"
	I1102 13:34:56.708876  231852 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1102 13:34:56.708929  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1102 13:34:56.736844  231852 cri.go:89] found id: ""
	I1102 13:34:56.736872  231852 logs.go:282] 0 containers: []
	W1102 13:34:56.736882  231852 logs.go:284] No container was found matching "storage-provisioner"
	I1102 13:34:56.736899  231852 logs.go:123] Gathering logs for kube-controller-manager [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf] ...
	I1102 13:34:56.736913  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	I1102 13:34:56.762935  231852 logs.go:123] Gathering logs for container status ...
	I1102 13:34:56.762967  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1102 13:34:56.795330  231852 logs.go:123] Gathering logs for kubelet ...
	I1102 13:34:56.795358  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1102 13:34:54.288576  285056 node_ready.go:57] node "old-k8s-version-054159" has "Ready":"False" status (will retry)
	W1102 13:34:56.787772  285056 node_ready.go:57] node "old-k8s-version-054159" has "Ready":"False" status (will retry)
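node_ready polls the node's `Ready` condition with a 6m budget, retrying while the status is "False". A hedged kubectl equivalent of the same wait:

    kubectl --context old-k8s-version-054159 wait --for=condition=Ready \
      node/old-k8s-version-054159 --timeout=360s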
	W1102 13:34:56.737188  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	W1102 13:34:58.737651  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	I1102 13:34:56.881636  231852 logs.go:123] Gathering logs for kube-controller-manager [0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5] ...
	I1102 13:34:56.881667  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0881ad9f50f175f0a0c9f33081dcefb6add513ec5cd2c5a4480a7bfeb3fb5cf5"
	I1102 13:34:56.908921  231852 logs.go:123] Gathering logs for CRI-O ...
	I1102 13:34:56.908947  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1102 13:34:56.964948  231852 logs.go:123] Gathering logs for dmesg ...
	I1102 13:34:56.964979  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1102 13:34:56.980667  231852 logs.go:123] Gathering logs for describe nodes ...
	I1102 13:34:56.980699  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1102 13:34:57.043213  231852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1102 13:34:57.043232  231852 logs.go:123] Gathering logs for kube-apiserver [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa] ...
	I1102 13:34:57.043246  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:34:57.074817  231852 logs.go:123] Gathering logs for kube-scheduler [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a] ...
	I1102 13:34:57.074846  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:59.629333  231852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 13:34:59.629766  231852 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1102 13:34:59.629878  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1102 13:34:59.629943  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1102 13:34:59.657100  231852 cri.go:89] found id: "cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:34:59.657126  231852 cri.go:89] found id: ""
	I1102 13:34:59.657137  231852 logs.go:282] 1 containers: [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa]
	I1102 13:34:59.657199  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:59.661265  231852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1102 13:34:59.661332  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1102 13:34:59.689264  231852 cri.go:89] found id: ""
	I1102 13:34:59.689292  231852 logs.go:282] 0 containers: []
	W1102 13:34:59.689302  231852 logs.go:284] No container was found matching "etcd"
	I1102 13:34:59.689308  231852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1102 13:34:59.689373  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1102 13:34:59.717833  231852 cri.go:89] found id: ""
	I1102 13:34:59.717862  231852 logs.go:282] 0 containers: []
	W1102 13:34:59.717883  231852 logs.go:284] No container was found matching "coredns"
	I1102 13:34:59.717895  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1102 13:34:59.717958  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1102 13:34:59.747033  231852 cri.go:89] found id: "548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:34:59.747059  231852 cri.go:89] found id: ""
	I1102 13:34:59.747072  231852 logs.go:282] 1 containers: [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a]
	I1102 13:34:59.747122  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:59.751166  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1102 13:34:59.751225  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1102 13:34:59.778407  231852 cri.go:89] found id: ""
	I1102 13:34:59.778437  231852 logs.go:282] 0 containers: []
	W1102 13:34:59.778447  231852 logs.go:284] No container was found matching "kube-proxy"
	I1102 13:34:59.778458  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1102 13:34:59.778519  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1102 13:34:59.806320  231852 cri.go:89] found id: "6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	I1102 13:34:59.806341  231852 cri.go:89] found id: ""
	I1102 13:34:59.806349  231852 logs.go:282] 1 containers: [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf]
	I1102 13:34:59.806396  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:34:59.810375  231852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1102 13:34:59.810433  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1102 13:34:59.837044  231852 cri.go:89] found id: ""
	I1102 13:34:59.837072  231852 logs.go:282] 0 containers: []
	W1102 13:34:59.837082  231852 logs.go:284] No container was found matching "kindnet"
	I1102 13:34:59.837090  231852 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1102 13:34:59.837148  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1102 13:34:59.863229  231852 cri.go:89] found id: ""
	I1102 13:34:59.863252  231852 logs.go:282] 0 containers: []
	W1102 13:34:59.863259  231852 logs.go:284] No container was found matching "storage-provisioner"
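
Each of the blocks above enumerates one component's containers with crictl and records the IDs it finds. A compact sketch of that loop, with the component names copied from the log and crictl assumed to be on the node's PATH:

    # List all containers (any state) for each control-plane component,
    # printing only their IDs, as the per-component queries in the log do.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      sudo crictl ps -a --quiet --name="${name}"
    done
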
	I1102 13:34:59.863268  231852 logs.go:123] Gathering logs for CRI-O ...
	I1102 13:34:59.863278  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1102 13:34:59.920378  231852 logs.go:123] Gathering logs for container status ...
	I1102 13:34:59.920414  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1102 13:34:59.950486  231852 logs.go:123] Gathering logs for kubelet ...
	I1102 13:34:59.950514  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1102 13:35:00.036514  231852 logs.go:123] Gathering logs for dmesg ...
	I1102 13:35:00.036547  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1102 13:35:00.052957  231852 logs.go:123] Gathering logs for describe nodes ...
	I1102 13:35:00.052986  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1102 13:35:00.109672  231852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
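
The describe-nodes failure is consistent with the refused healthz probes: nothing is listening on the apiserver port yet, so the kubeconfig's localhost:8443 endpoint is refused as well. A hedged way to confirm that directly on the node, assuming the ss utility is available there:

    # Empty output while the apiserver is down; one LISTEN line once it is up.
    sudo ss -ltnp | grep 8443
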
	I1102 13:35:00.109699  231852 logs.go:123] Gathering logs for kube-apiserver [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa] ...
	I1102 13:35:00.109721  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:35:00.142418  231852 logs.go:123] Gathering logs for kube-scheduler [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a] ...
	I1102 13:35:00.142449  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:35:00.197354  231852 logs.go:123] Gathering logs for kube-controller-manager [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf] ...
	I1102 13:35:00.197389  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	W1102 13:34:59.286681  285056 node_ready.go:57] node "old-k8s-version-054159" has "Ready":"False" status (will retry)
	W1102 13:35:01.287092  285056 node_ready.go:57] node "old-k8s-version-054159" has "Ready":"False" status (will retry)
	W1102 13:35:01.237259  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	W1102 13:35:03.736408  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
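
Three concurrent test processes are interleaved in this part of the report, distinguishable by the PID column: 231852 (the run whose apiserver is still refusing connections), 285056 (the old-k8s-version-054159 run), and 278482 (the bridge-123357 run). A small sketch to follow a single process, assuming the report has been saved to report.txt (a hypothetical filename):

    # Show only the old-k8s-version-054159 process's lines.
    grep ' 285056 ' report.txt
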
	I1102 13:35:02.725689  231852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 13:35:02.726110  231852 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1102 13:35:02.726172  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1102 13:35:02.726229  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1102 13:35:02.757333  231852 cri.go:89] found id: "cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:35:02.757360  231852 cri.go:89] found id: ""
	I1102 13:35:02.757368  231852 logs.go:282] 1 containers: [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa]
	I1102 13:35:02.757425  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:35:02.761814  231852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1102 13:35:02.761877  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1102 13:35:02.792817  231852 cri.go:89] found id: ""
	I1102 13:35:02.792842  231852 logs.go:282] 0 containers: []
	W1102 13:35:02.792853  231852 logs.go:284] No container was found matching "etcd"
	I1102 13:35:02.792861  231852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1102 13:35:02.792915  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1102 13:35:02.823789  231852 cri.go:89] found id: ""
	I1102 13:35:02.823823  231852 logs.go:282] 0 containers: []
	W1102 13:35:02.823836  231852 logs.go:284] No container was found matching "coredns"
	I1102 13:35:02.823844  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1102 13:35:02.823906  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1102 13:35:02.855202  231852 cri.go:89] found id: "548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:35:02.855228  231852 cri.go:89] found id: ""
	I1102 13:35:02.855237  231852 logs.go:282] 1 containers: [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a]
	I1102 13:35:02.855298  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:35:02.859361  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1102 13:35:02.859431  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1102 13:35:02.891709  231852 cri.go:89] found id: ""
	I1102 13:35:02.891740  231852 logs.go:282] 0 containers: []
	W1102 13:35:02.891753  231852 logs.go:284] No container was found matching "kube-proxy"
	I1102 13:35:02.891761  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1102 13:35:02.891821  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1102 13:35:02.924784  231852 cri.go:89] found id: "6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	I1102 13:35:02.924815  231852 cri.go:89] found id: ""
	I1102 13:35:02.924824  231852 logs.go:282] 1 containers: [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf]
	I1102 13:35:02.924880  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:35:02.929138  231852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1102 13:35:02.929204  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1102 13:35:02.959679  231852 cri.go:89] found id: ""
	I1102 13:35:02.959708  231852 logs.go:282] 0 containers: []
	W1102 13:35:02.959719  231852 logs.go:284] No container was found matching "kindnet"
	I1102 13:35:02.959735  231852 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1102 13:35:02.959801  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1102 13:35:02.989979  231852 cri.go:89] found id: ""
	I1102 13:35:02.990008  231852 logs.go:282] 0 containers: []
	W1102 13:35:02.990019  231852 logs.go:284] No container was found matching "storage-provisioner"
	I1102 13:35:02.990030  231852 logs.go:123] Gathering logs for container status ...
	I1102 13:35:02.990066  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1102 13:35:03.023197  231852 logs.go:123] Gathering logs for kubelet ...
	I1102 13:35:03.023225  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1102 13:35:03.124299  231852 logs.go:123] Gathering logs for dmesg ...
	I1102 13:35:03.124341  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1102 13:35:03.141474  231852 logs.go:123] Gathering logs for describe nodes ...
	I1102 13:35:03.141507  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1102 13:35:03.201504  231852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1102 13:35:03.201527  231852 logs.go:123] Gathering logs for kube-apiserver [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa] ...
	I1102 13:35:03.201542  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:35:03.239104  231852 logs.go:123] Gathering logs for kube-scheduler [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a] ...
	I1102 13:35:03.239137  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:35:03.295841  231852 logs.go:123] Gathering logs for kube-controller-manager [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf] ...
	I1102 13:35:03.295878  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	I1102 13:35:03.324510  231852 logs.go:123] Gathering logs for CRI-O ...
	I1102 13:35:03.324537  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1102 13:35:05.884164  231852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 13:35:05.884492  231852 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1102 13:35:05.884541  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1102 13:35:05.884601  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1102 13:35:05.912722  231852 cri.go:89] found id: "cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:35:05.912742  231852 cri.go:89] found id: ""
	I1102 13:35:05.912750  231852 logs.go:282] 1 containers: [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa]
	I1102 13:35:05.912801  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:35:05.916802  231852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1102 13:35:05.916862  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1102 13:35:05.943525  231852 cri.go:89] found id: ""
	I1102 13:35:05.943551  231852 logs.go:282] 0 containers: []
	W1102 13:35:05.943573  231852 logs.go:284] No container was found matching "etcd"
	I1102 13:35:05.943581  231852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1102 13:35:05.943641  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1102 13:35:05.970209  231852 cri.go:89] found id: ""
	I1102 13:35:05.970235  231852 logs.go:282] 0 containers: []
	W1102 13:35:05.970245  231852 logs.go:284] No container was found matching "coredns"
	I1102 13:35:05.970252  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1102 13:35:05.970310  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1102 13:35:05.996961  231852 cri.go:89] found id: "548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:35:05.996994  231852 cri.go:89] found id: ""
	I1102 13:35:05.997003  231852 logs.go:282] 1 containers: [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a]
	I1102 13:35:05.997060  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:35:06.001409  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1102 13:35:06.001485  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1102 13:35:06.032210  231852 cri.go:89] found id: ""
	I1102 13:35:06.032245  231852 logs.go:282] 0 containers: []
	W1102 13:35:06.032253  231852 logs.go:284] No container was found matching "kube-proxy"
	I1102 13:35:06.032259  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1102 13:35:06.032313  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1102 13:35:06.059641  231852 cri.go:89] found id: "6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	I1102 13:35:06.059663  231852 cri.go:89] found id: ""
	I1102 13:35:06.059672  231852 logs.go:282] 1 containers: [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf]
	I1102 13:35:06.059733  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:35:06.063663  231852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1102 13:35:06.063721  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1102 13:35:06.089877  231852 cri.go:89] found id: ""
	I1102 13:35:06.089902  231852 logs.go:282] 0 containers: []
	W1102 13:35:06.089910  231852 logs.go:284] No container was found matching "kindnet"
	I1102 13:35:06.089916  231852 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1102 13:35:06.089970  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1102 13:35:06.116839  231852 cri.go:89] found id: ""
	I1102 13:35:06.116868  231852 logs.go:282] 0 containers: []
	W1102 13:35:06.116879  231852 logs.go:284] No container was found matching "storage-provisioner"
	I1102 13:35:06.116890  231852 logs.go:123] Gathering logs for dmesg ...
	I1102 13:35:06.116907  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1102 13:35:06.132441  231852 logs.go:123] Gathering logs for describe nodes ...
	I1102 13:35:06.132473  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1102 13:35:06.189720  231852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1102 13:35:06.189747  231852 logs.go:123] Gathering logs for kube-apiserver [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa] ...
	I1102 13:35:06.189760  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:35:06.221267  231852 logs.go:123] Gathering logs for kube-scheduler [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a] ...
	I1102 13:35:06.221296  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:35:06.275489  231852 logs.go:123] Gathering logs for kube-controller-manager [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf] ...
	I1102 13:35:06.275524  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	I1102 13:35:06.303009  231852 logs.go:123] Gathering logs for CRI-O ...
	I1102 13:35:06.303036  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1102 13:35:06.362235  231852 logs.go:123] Gathering logs for container status ...
	I1102 13:35:06.362272  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1102 13:35:06.392734  231852 logs.go:123] Gathering logs for kubelet ...
	I1102 13:35:06.392766  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1102 13:35:03.787135  285056 node_ready.go:57] node "old-k8s-version-054159" has "Ready":"False" status (will retry)
	I1102 13:35:04.786780  285056 node_ready.go:49] node "old-k8s-version-054159" is "Ready"
	I1102 13:35:04.786813  285056 node_ready.go:38] duration metric: took 12.503209763s for node "old-k8s-version-054159" to be "Ready" ...
	I1102 13:35:04.786831  285056 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:35:04.786883  285056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:35:04.802160  285056 api_server.go:72] duration metric: took 12.867901116s to wait for apiserver process to appear ...
	I1102 13:35:04.802191  285056 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:35:04.802213  285056 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:35:04.807984  285056 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 13:35:04.809233  285056 api_server.go:141] control plane version: v1.28.0
	I1102 13:35:04.809264  285056 api_server.go:131] duration metric: took 7.064094ms to wait for apiserver health ...
	I1102 13:35:04.809274  285056 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:35:04.814040  285056 system_pods.go:59] 8 kube-system pods found
	I1102 13:35:04.814083  285056 system_pods.go:61] "coredns-5dd5756b68-th5sb" [824870c1-a7b3-46b0-90bf-8b731c8a4e4a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:35:04.814091  285056 system_pods.go:61] "etcd-old-k8s-version-054159" [aaaa6d82-c2fd-455f-a575-2c35e85f702d] Running
	I1102 13:35:04.814099  285056 system_pods.go:61] "kindnet-cmgvz" [8f518a86-d135-4e6f-8945-a200f813f3cf] Running
	I1102 13:35:04.814106  285056 system_pods.go:61] "kube-apiserver-old-k8s-version-054159" [efa92822-7931-4d0d-b0ee-e86bf57ca6b9] Running
	I1102 13:35:04.814111  285056 system_pods.go:61] "kube-controller-manager-old-k8s-version-054159" [5795fcb3-def6-4b6b-ba7e-b4b452e1b1b6] Running
	I1102 13:35:04.814132  285056 system_pods.go:61] "kube-proxy-l2sh4" [d388d3f4-5f54-4cdc-8b0f-ea4929149bd5] Running
	I1102 13:35:04.814137  285056 system_pods.go:61] "kube-scheduler-old-k8s-version-054159" [48a39d56-979f-493d-81be-de1eb581247e] Running
	I1102 13:35:04.814144  285056 system_pods.go:61] "storage-provisioner" [bc262f10-de9b-4454-afcd-05e4195906ff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:35:04.814158  285056 system_pods.go:74] duration metric: took 4.876888ms to wait for pod list to return data ...
	I1102 13:35:04.814172  285056 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:35:04.816477  285056 default_sa.go:45] found service account: "default"
	I1102 13:35:04.816498  285056 default_sa.go:55] duration metric: took 2.319504ms for default service account to be created ...
	I1102 13:35:04.816509  285056 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:35:04.822007  285056 system_pods.go:86] 8 kube-system pods found
	I1102 13:35:04.822040  285056 system_pods.go:89] "coredns-5dd5756b68-th5sb" [824870c1-a7b3-46b0-90bf-8b731c8a4e4a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:35:04.822050  285056 system_pods.go:89] "etcd-old-k8s-version-054159" [aaaa6d82-c2fd-455f-a575-2c35e85f702d] Running
	I1102 13:35:04.822060  285056 system_pods.go:89] "kindnet-cmgvz" [8f518a86-d135-4e6f-8945-a200f813f3cf] Running
	I1102 13:35:04.822066  285056 system_pods.go:89] "kube-apiserver-old-k8s-version-054159" [efa92822-7931-4d0d-b0ee-e86bf57ca6b9] Running
	I1102 13:35:04.822072  285056 system_pods.go:89] "kube-controller-manager-old-k8s-version-054159" [5795fcb3-def6-4b6b-ba7e-b4b452e1b1b6] Running
	I1102 13:35:04.822077  285056 system_pods.go:89] "kube-proxy-l2sh4" [d388d3f4-5f54-4cdc-8b0f-ea4929149bd5] Running
	I1102 13:35:04.822082  285056 system_pods.go:89] "kube-scheduler-old-k8s-version-054159" [48a39d56-979f-493d-81be-de1eb581247e] Running
	I1102 13:35:04.822092  285056 system_pods.go:89] "storage-provisioner" [bc262f10-de9b-4454-afcd-05e4195906ff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:35:04.822117  285056 retry.go:31] will retry after 306.125508ms: missing components: kube-dns
	I1102 13:35:05.132491  285056 system_pods.go:86] 8 kube-system pods found
	I1102 13:35:05.132525  285056 system_pods.go:89] "coredns-5dd5756b68-th5sb" [824870c1-a7b3-46b0-90bf-8b731c8a4e4a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:35:05.132531  285056 system_pods.go:89] "etcd-old-k8s-version-054159" [aaaa6d82-c2fd-455f-a575-2c35e85f702d] Running
	I1102 13:35:05.132536  285056 system_pods.go:89] "kindnet-cmgvz" [8f518a86-d135-4e6f-8945-a200f813f3cf] Running
	I1102 13:35:05.132539  285056 system_pods.go:89] "kube-apiserver-old-k8s-version-054159" [efa92822-7931-4d0d-b0ee-e86bf57ca6b9] Running
	I1102 13:35:05.132543  285056 system_pods.go:89] "kube-controller-manager-old-k8s-version-054159" [5795fcb3-def6-4b6b-ba7e-b4b452e1b1b6] Running
	I1102 13:35:05.132546  285056 system_pods.go:89] "kube-proxy-l2sh4" [d388d3f4-5f54-4cdc-8b0f-ea4929149bd5] Running
	I1102 13:35:05.132549  285056 system_pods.go:89] "kube-scheduler-old-k8s-version-054159" [48a39d56-979f-493d-81be-de1eb581247e] Running
	I1102 13:35:05.132554  285056 system_pods.go:89] "storage-provisioner" [bc262f10-de9b-4454-afcd-05e4195906ff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:35:05.132604  285056 retry.go:31] will retry after 303.011679ms: missing components: kube-dns
	I1102 13:35:05.439741  285056 system_pods.go:86] 8 kube-system pods found
	I1102 13:35:05.439778  285056 system_pods.go:89] "coredns-5dd5756b68-th5sb" [824870c1-a7b3-46b0-90bf-8b731c8a4e4a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:35:05.439787  285056 system_pods.go:89] "etcd-old-k8s-version-054159" [aaaa6d82-c2fd-455f-a575-2c35e85f702d] Running
	I1102 13:35:05.439793  285056 system_pods.go:89] "kindnet-cmgvz" [8f518a86-d135-4e6f-8945-a200f813f3cf] Running
	I1102 13:35:05.439798  285056 system_pods.go:89] "kube-apiserver-old-k8s-version-054159" [efa92822-7931-4d0d-b0ee-e86bf57ca6b9] Running
	I1102 13:35:05.439805  285056 system_pods.go:89] "kube-controller-manager-old-k8s-version-054159" [5795fcb3-def6-4b6b-ba7e-b4b452e1b1b6] Running
	I1102 13:35:05.439809  285056 system_pods.go:89] "kube-proxy-l2sh4" [d388d3f4-5f54-4cdc-8b0f-ea4929149bd5] Running
	I1102 13:35:05.439814  285056 system_pods.go:89] "kube-scheduler-old-k8s-version-054159" [48a39d56-979f-493d-81be-de1eb581247e] Running
	I1102 13:35:05.439821  285056 system_pods.go:89] "storage-provisioner" [bc262f10-de9b-4454-afcd-05e4195906ff] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:35:05.439841  285056 retry.go:31] will retry after 411.578466ms: missing components: kube-dns
	I1102 13:35:05.856064  285056 system_pods.go:86] 8 kube-system pods found
	I1102 13:35:05.856111  285056 system_pods.go:89] "coredns-5dd5756b68-th5sb" [824870c1-a7b3-46b0-90bf-8b731c8a4e4a] Running
	I1102 13:35:05.856119  285056 system_pods.go:89] "etcd-old-k8s-version-054159" [aaaa6d82-c2fd-455f-a575-2c35e85f702d] Running
	I1102 13:35:05.856125  285056 system_pods.go:89] "kindnet-cmgvz" [8f518a86-d135-4e6f-8945-a200f813f3cf] Running
	I1102 13:35:05.856132  285056 system_pods.go:89] "kube-apiserver-old-k8s-version-054159" [efa92822-7931-4d0d-b0ee-e86bf57ca6b9] Running
	I1102 13:35:05.856139  285056 system_pods.go:89] "kube-controller-manager-old-k8s-version-054159" [5795fcb3-def6-4b6b-ba7e-b4b452e1b1b6] Running
	I1102 13:35:05.856148  285056 system_pods.go:89] "kube-proxy-l2sh4" [d388d3f4-5f54-4cdc-8b0f-ea4929149bd5] Running
	I1102 13:35:05.856156  285056 system_pods.go:89] "kube-scheduler-old-k8s-version-054159" [48a39d56-979f-493d-81be-de1eb581247e] Running
	I1102 13:35:05.856165  285056 system_pods.go:89] "storage-provisioner" [bc262f10-de9b-4454-afcd-05e4195906ff] Running
	I1102 13:35:05.856174  285056 system_pods.go:126] duration metric: took 1.03965812s to wait for k8s-apps to be running ...
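
The retry loop above (logged by retry.go) polls the kube-system pod list with short backoffs (306ms, 303ms, 411ms here) until nothing on its required-components list, in this case kube-dns, is still Pending. The equivalent manual check, as a rough sketch using the same label the waiter itself uses later in the log:

    kubectl -n kube-system get pods -l k8s-app=kube-dns
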
	I1102 13:35:05.856187  285056 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:35:05.856247  285056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:35:05.869406  285056 system_svc.go:56] duration metric: took 13.208266ms WaitForService to wait for kubelet
	I1102 13:35:05.869436  285056 kubeadm.go:587] duration metric: took 13.935264377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:35:05.869458  285056 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:35:05.872035  285056 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:35:05.872059  285056 node_conditions.go:123] node cpu capacity is 8
	I1102 13:35:05.872088  285056 node_conditions.go:105] duration metric: took 2.61216ms to run NodePressure ...
	I1102 13:35:05.872105  285056 start.go:242] waiting for startup goroutines ...
	I1102 13:35:05.872119  285056 start.go:247] waiting for cluster config update ...
	I1102 13:35:05.872134  285056 start.go:256] writing updated cluster config ...
	I1102 13:35:05.872405  285056 ssh_runner.go:195] Run: rm -f paused
	I1102 13:35:05.876026  285056 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:35:05.879738  285056 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-th5sb" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:05.883548  285056 pod_ready.go:94] pod "coredns-5dd5756b68-th5sb" is "Ready"
	I1102 13:35:05.883595  285056 pod_ready.go:86] duration metric: took 3.836145ms for pod "coredns-5dd5756b68-th5sb" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:05.886056  285056 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:05.889692  285056 pod_ready.go:94] pod "etcd-old-k8s-version-054159" is "Ready"
	I1102 13:35:05.889710  285056 pod_ready.go:86] duration metric: took 3.636858ms for pod "etcd-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:05.892089  285056 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:05.895904  285056 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-054159" is "Ready"
	I1102 13:35:05.895923  285056 pod_ready.go:86] duration metric: took 3.812064ms for pod "kube-apiserver-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:05.898647  285056 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:06.279961  285056 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-054159" is "Ready"
	I1102 13:35:06.279993  285056 pod_ready.go:86] duration metric: took 381.327485ms for pod "kube-controller-manager-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:06.480723  285056 pod_ready.go:83] waiting for pod "kube-proxy-l2sh4" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:06.879904  285056 pod_ready.go:94] pod "kube-proxy-l2sh4" is "Ready"
	I1102 13:35:06.879931  285056 pod_ready.go:86] duration metric: took 399.181043ms for pod "kube-proxy-l2sh4" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:07.080626  285056 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:07.479889  285056 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-054159" is "Ready"
	I1102 13:35:07.479916  285056 pod_ready.go:86] duration metric: took 399.265653ms for pod "kube-scheduler-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:07.479928  285056 pod_ready.go:40] duration metric: took 1.60387711s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
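
This "extra waiting" phase checks each labelled control-plane pod for the Ready condition, one label at a time. Roughly the same effect per label, sketched with kubectl wait (label and the 4m timeout copied from the log lines above):

    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=4m
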
	I1102 13:35:07.523247  285056 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1102 13:35:07.525069  285056 out.go:203] 
	W1102 13:35:07.526384  285056 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1102 13:35:07.527393  285056 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1102 13:35:07.528590  285056 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-054159" cluster and "default" namespace by default
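
The skew warning above flags a host kubectl (1.34.1) six minor versions ahead of the 1.28.0 cluster, well outside kubectl's documented one-minor-version support window; the log's own remedy is to go through minikube's bundled kubectl. Spelled out with the profile name from this run:

    minikube -p old-k8s-version-054159 kubectl -- get pods -A
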
	W1102 13:35:06.236799  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	W1102 13:35:08.736910  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	I1102 13:35:08.992685  231852 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 13:35:08.993774  231852 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1102 13:35:08.993826  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1102 13:35:08.993869  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1102 13:35:09.024618  231852 cri.go:89] found id: "cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:35:09.024648  231852 cri.go:89] found id: ""
	I1102 13:35:09.024658  231852 logs.go:282] 1 containers: [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa]
	I1102 13:35:09.024722  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:35:09.029540  231852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1102 13:35:09.029643  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1102 13:35:09.059683  231852 cri.go:89] found id: ""
	I1102 13:35:09.059714  231852 logs.go:282] 0 containers: []
	W1102 13:35:09.059723  231852 logs.go:284] No container was found matching "etcd"
	I1102 13:35:09.059732  231852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1102 13:35:09.059790  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1102 13:35:09.087592  231852 cri.go:89] found id: ""
	I1102 13:35:09.087619  231852 logs.go:282] 0 containers: []
	W1102 13:35:09.087629  231852 logs.go:284] No container was found matching "coredns"
	I1102 13:35:09.087636  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1102 13:35:09.087689  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1102 13:35:09.115359  231852 cri.go:89] found id: "548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:35:09.115387  231852 cri.go:89] found id: ""
	I1102 13:35:09.115394  231852 logs.go:282] 1 containers: [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a]
	I1102 13:35:09.115461  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:35:09.119317  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1102 13:35:09.119384  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1102 13:35:09.147317  231852 cri.go:89] found id: ""
	I1102 13:35:09.147349  231852 logs.go:282] 0 containers: []
	W1102 13:35:09.147361  231852 logs.go:284] No container was found matching "kube-proxy"
	I1102 13:35:09.147369  231852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1102 13:35:09.147429  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1102 13:35:09.176835  231852 cri.go:89] found id: "6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	I1102 13:35:09.176857  231852 cri.go:89] found id: ""
	I1102 13:35:09.176866  231852 logs.go:282] 1 containers: [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf]
	I1102 13:35:09.176922  231852 ssh_runner.go:195] Run: which crictl
	I1102 13:35:09.180894  231852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1102 13:35:09.180946  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1102 13:35:09.212663  231852 cri.go:89] found id: ""
	I1102 13:35:09.212690  231852 logs.go:282] 0 containers: []
	W1102 13:35:09.212700  231852 logs.go:284] No container was found matching "kindnet"
	I1102 13:35:09.212708  231852 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1102 13:35:09.212762  231852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1102 13:35:09.247546  231852 cri.go:89] found id: ""
	I1102 13:35:09.247615  231852 logs.go:282] 0 containers: []
	W1102 13:35:09.247630  231852 logs.go:284] No container was found matching "storage-provisioner"
	I1102 13:35:09.247639  231852 logs.go:123] Gathering logs for CRI-O ...
	I1102 13:35:09.247654  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1102 13:35:09.332845  231852 logs.go:123] Gathering logs for container status ...
	I1102 13:35:09.332886  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1102 13:35:09.368831  231852 logs.go:123] Gathering logs for kubelet ...
	I1102 13:35:09.368860  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1102 13:35:09.469911  231852 logs.go:123] Gathering logs for dmesg ...
	I1102 13:35:09.469944  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1102 13:35:09.485388  231852 logs.go:123] Gathering logs for describe nodes ...
	I1102 13:35:09.485414  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1102 13:35:09.544160  231852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1102 13:35:09.544188  231852 logs.go:123] Gathering logs for kube-apiserver [cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa] ...
	I1102 13:35:09.544204  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cf72004d081675f385063d7d114ce30d38fe09325e13a9a6de6c10e1ad9ccdfa"
	I1102 13:35:09.584559  231852 logs.go:123] Gathering logs for kube-scheduler [548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a] ...
	I1102 13:35:09.584598  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 548f986afcc42449b095854fe0c879a9785a5f2c02337b761182d822cb74938a"
	I1102 13:35:09.642881  231852 logs.go:123] Gathering logs for kube-controller-manager [6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf] ...
	I1102 13:35:09.642913  231852 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6cd4d7150e03009009d994235d48166dc8a35ee9bb84df1b1ca53de794f2cbdf"
	W1102 13:35:10.737090  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	W1102 13:35:13.237407  278482 pod_ready.go:104] pod "coredns-66bc5c9577-64bzq" is not "Ready", error: <nil>
	I1102 13:35:13.737464  278482 pod_ready.go:94] pod "coredns-66bc5c9577-64bzq" is "Ready"
	I1102 13:35:13.737496  278482 pod_ready.go:86] duration metric: took 36.506471468s for pod "coredns-66bc5c9577-64bzq" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:13.737510  278482 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kfmcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:13.739225  278482 pod_ready.go:99] pod "coredns-66bc5c9577-kfmcz" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-kfmcz" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-kfmcz" not found
	I1102 13:35:13.739242  278482 pod_ready.go:86] duration metric: took 1.726032ms for pod "coredns-66bc5c9577-kfmcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:13.741651  278482 pod_ready.go:83] waiting for pod "etcd-bridge-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:13.745317  278482 pod_ready.go:94] pod "etcd-bridge-123357" is "Ready"
	I1102 13:35:13.745341  278482 pod_ready.go:86] duration metric: took 3.670818ms for pod "etcd-bridge-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:13.747108  278482 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:13.750645  278482 pod_ready.go:94] pod "kube-apiserver-bridge-123357" is "Ready"
	I1102 13:35:13.750665  278482 pod_ready.go:86] duration metric: took 3.537294ms for pod "kube-apiserver-bridge-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:13.752380  278482 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:14.135208  278482 pod_ready.go:94] pod "kube-controller-manager-bridge-123357" is "Ready"
	I1102 13:35:14.135238  278482 pod_ready.go:86] duration metric: took 382.837663ms for pod "kube-controller-manager-bridge-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:14.335533  278482 pod_ready.go:83] waiting for pod "kube-proxy-hhcgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:14.735199  278482 pod_ready.go:94] pod "kube-proxy-hhcgf" is "Ready"
	I1102 13:35:14.735231  278482 pod_ready.go:86] duration metric: took 399.670762ms for pod "kube-proxy-hhcgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:14.935290  278482 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:15.335645  278482 pod_ready.go:94] pod "kube-scheduler-bridge-123357" is "Ready"
	I1102 13:35:15.335676  278482 pod_ready.go:86] duration metric: took 400.355545ms for pod "kube-scheduler-bridge-123357" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:35:15.335690  278482 pod_ready.go:40] duration metric: took 38.108773897s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:35:15.385175  278482 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:35:15.386713  278482 out.go:179] * Done! kubectl is now configured to use "bridge-123357" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 02 13:35:04 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:04.798896632Z" level=info msg="Starting container: a929086f732d861a631392c7bb212948f71426bd11db0cbce1e27df330431046" id=558da728-45c0-491d-add7-3ce2ff8be13e name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:35:04 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:04.800906625Z" level=info msg="Started container" PID=2173 containerID=a929086f732d861a631392c7bb212948f71426bd11db0cbce1e27df330431046 description=kube-system/coredns-5dd5756b68-th5sb/coredns id=558da728-45c0-491d-add7-3ce2ff8be13e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a5e9e11e6b0342f894ddb1753e835228b66ce1f6e36516496ec42a065f2b3fa
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.965292993Z" level=info msg="Running pod sandbox: default/busybox/POD" id=54289181-6f3b-4fe6-b081-4667538ae805 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.965421239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.970441559Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3b242f54ae659f3549126818d8ff438f733dab0c5aaa2a3e359ea15af402f7de UID:b69c12c6-19df-47c3-8096-02b70e53bbd1 NetNS:/var/run/netns/a7c93470-8b8b-46e2-b37c-4c309fe6fc82 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0011027d8}] Aliases:map[]}"
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.970468976Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.980457245Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3b242f54ae659f3549126818d8ff438f733dab0c5aaa2a3e359ea15af402f7de UID:b69c12c6-19df-47c3-8096-02b70e53bbd1 NetNS:/var/run/netns/a7c93470-8b8b-46e2-b37c-4c309fe6fc82 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0011027d8}] Aliases:map[]}"
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.980622449Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.981387499Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.982495947Z" level=info msg="Ran pod sandbox 3b242f54ae659f3549126818d8ff438f733dab0c5aaa2a3e359ea15af402f7de with infra container: default/busybox/POD" id=54289181-6f3b-4fe6-b081-4667538ae805 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.983684436Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1d136896-4d2a-4e78-b218-3906382e3e78 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.983810575Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1d136896-4d2a-4e78-b218-3906382e3e78 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.983859465Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1d136896-4d2a-4e78-b218-3906382e3e78 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.984480807Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=14ddca69-d5f1-4c0e-84e8-4dedd1bfae17 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:35:07 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:07.988369391Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 02 13:35:09 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:09.372142112Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=14ddca69-d5f1-4c0e-84e8-4dedd1bfae17 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:35:09 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:09.373136163Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=98e8bf5b-5bfa-4c95-8fce-912db124082c name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:35:09 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:09.374649673Z" level=info msg="Creating container: default/busybox/busybox" id=e1bac898-f10e-445e-8485-afcb75a79dbe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:35:09 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:09.374779673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:35:09 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:09.379056749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:35:09 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:09.379523434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:35:09 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:09.407126955Z" level=info msg="Created container f8f017800b1b2081f0ddf87b8582470d18512abf1e983d393301775ff33b6b03: default/busybox/busybox" id=e1bac898-f10e-445e-8485-afcb75a79dbe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:35:09 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:09.407848299Z" level=info msg="Starting container: f8f017800b1b2081f0ddf87b8582470d18512abf1e983d393301775ff33b6b03" id=5969f72d-15a4-4a62-b54e-353a747286e6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:35:09 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:09.409721469Z" level=info msg="Started container" PID=2249 containerID=f8f017800b1b2081f0ddf87b8582470d18512abf1e983d393301775ff33b6b03 description=default/busybox/busybox id=5969f72d-15a4-4a62-b54e-353a747286e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b242f54ae659f3549126818d8ff438f733dab0c5aaa2a3e359ea15af402f7de
	Nov 02 13:35:15 old-k8s-version-054159 crio[802]: time="2025-11-02T13:35:15.75721471Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
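
The CRI-O excerpt shows the full image lifecycle for default/busybox: an ImageStatus miss, a pull resolved to a digest, then CreateContainer and StartContainer. A hedged sketch to replay just the image steps by hand on the node, with the image reference copied from the log:

    sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
    sudo crictl images --digests | grep k8s-minikube/busybox
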
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	f8f017800b1b2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   3b242f54ae659       busybox                                          default
	a929086f732d8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   4a5e9e11e6b03       coredns-5dd5756b68-th5sb                         kube-system
	bb2bb52a057a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   5bf65e5a25b80       storage-provisioner                              kube-system
	923b5f0dba095       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   47b00d37f1da8       kindnet-cmgvz                                    kube-system
	84d9a819681fe       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   197b867361c7e       kube-proxy-l2sh4                                 kube-system
	80de28a95fb9c       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   37b25d12c4e78       kube-apiserver-old-k8s-version-054159            kube-system
	c4c18bd3455bf       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   7265a765c9a90       etcd-old-k8s-version-054159                      kube-system
	8ce25151ce1e0       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   e457be13e7b14       kube-controller-manager-old-k8s-version-054159   kube-system
	73eccdf096349       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   b838a845238b2       kube-scheduler-old-k8s-version-054159            kube-system
	
	
	==> coredns [a929086f732d861a631392c7bb212948f71426bd11db0cbce1e27df330431046] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35468 - 34094 "HINFO IN 2390955543217008110.7653823104009999407. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016625671s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-054159
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-054159
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=old-k8s-version-054159
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_34_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:34:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-054159
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:35:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:35:09 +0000   Sun, 02 Nov 2025 13:34:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:35:09 +0000   Sun, 02 Nov 2025 13:34:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:35:09 +0000   Sun, 02 Nov 2025 13:34:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:35:09 +0000   Sun, 02 Nov 2025 13:35:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-054159
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                4d09a63f-c542-4c8f-a08b-d437451b349c
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-th5sb                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-054159                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-cmgvz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-054159             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-054159    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-l2sh4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-054159             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node old-k8s-version-054159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node old-k8s-version-054159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node old-k8s-version-054159 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node old-k8s-version-054159 event: Registered Node old-k8s-version-054159 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-054159 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[ +16.382292] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 12:51] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	
	
	==> etcd [c4c18bd3455bffc9402ca4c3ed555a283b452355506d836e06d2d7fa44e4c8fa] <==
	{"level":"info","ts":"2025-11-02T13:34:33.999454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-02T13:34:33.999608Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-02T13:34:34.001586Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-02T13:34:34.001663Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T13:34:34.0017Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T13:34:34.001819Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-02T13:34:34.001881Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-02T13:34:34.091245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-02T13:34:34.09129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-02T13:34:34.091316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-02T13:34:34.09133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-02T13:34:34.091335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-02T13:34:34.091343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-02T13:34:34.09135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-02T13:34:34.09217Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-054159 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-02T13:34:34.092241Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T13:34:34.092314Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T13:34:34.092355Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T13:34:34.092621Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-02T13:34:34.092645Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-02T13:34:34.09318Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T13:34:34.093271Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T13:34:34.093299Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T13:34:34.093734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-02T13:34:34.094011Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 13:35:17 up  1:17,  0 user,  load average: 3.80, 3.87, 2.42
	Linux old-k8s-version-054159 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [923b5f0dba095288516f555d48b7b6ea2aea0d9a6390544c790ab58d61f9abc4] <==
	I1102 13:34:53.988494       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:34:53.988742       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 13:34:53.988893       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:34:53.988913       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:34:53.988935       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:34:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:34:54.287767       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:34:54.287820       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:34:54.287832       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:34:54.287942       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:34:54.588088       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:34:54.590452       1 metrics.go:72] Registering metrics
	I1102 13:34:54.590737       1 controller.go:711] "Syncing nftables rules"
	I1102 13:35:04.290667       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:35:04.290734       1 main.go:301] handling current node
	I1102 13:35:14.291185       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:35:14.291220       1 main.go:301] handling current node
	
	
	==> kube-apiserver [80de28a95fb9c3703f1a4edb1f41b5b10b4acae834482f101d2e6a28a7387f72] <==
	I1102 13:34:35.488808       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1102 13:34:35.488904       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1102 13:34:35.489024       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1102 13:34:35.489922       1 controller.go:624] quota admission added evaluator for: namespaces
	I1102 13:34:35.489942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1102 13:34:35.489970       1 aggregator.go:166] initial CRD sync complete...
	I1102 13:34:35.489981       1 autoregister_controller.go:141] Starting autoregister controller
	I1102 13:34:35.489987       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 13:34:35.489995       1 cache.go:39] Caches are synced for autoregister controller
	I1102 13:34:35.690849       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:34:36.395619       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 13:34:36.399178       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 13:34:36.399265       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:34:36.832773       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:34:36.866007       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:34:36.986606       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 13:34:36.993050       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1102 13:34:36.994132       1 controller.go:624] quota admission added evaluator for: endpoints
	I1102 13:34:36.998613       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:34:37.418328       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1102 13:34:38.406658       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1102 13:34:38.416430       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 13:34:38.425974       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1102 13:34:51.198818       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1102 13:34:52.001426       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8ce25151ce1e07c9df3cabbdb6a40f524e8824ab8593970e3e7de682399563f2] <==
	I1102 13:34:51.282207       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1102 13:34:51.294946       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1102 13:34:51.301049       1 shared_informer.go:318] Caches are synced for resource quota
	I1102 13:34:51.398843       1 shared_informer.go:318] Caches are synced for resource quota
	I1102 13:34:51.715184       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 13:34:51.745073       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 13:34:51.745098       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1102 13:34:52.013637       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-l2sh4"
	I1102 13:34:52.017225       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cmgvz"
	I1102 13:34:52.056737       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mbfk9"
	I1102 13:34:52.064982       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-th5sb"
	I1102 13:34:52.073366       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="871.070317ms"
	I1102 13:34:52.082534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.11309ms"
	I1102 13:34:52.082667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.764µs"
	I1102 13:34:52.312009       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1102 13:34:52.321773       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-mbfk9"
	I1102 13:34:52.329898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.413229ms"
	I1102 13:34:52.344436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.4142ms"
	I1102 13:34:52.358604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.007029ms"
	I1102 13:34:52.358714       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.45µs"
	I1102 13:35:04.451696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.952µs"
	I1102 13:35:04.466982       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.783µs"
	I1102 13:35:05.576494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.454996ms"
	I1102 13:35:05.576633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.393µs"
	I1102 13:35:06.158841       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [84d9a819681fefd50cf8b4e9fd998451e197b5bf523e7990f693b8a9c3e9a8b4] <==
	I1102 13:34:52.417885       1 server_others.go:69] "Using iptables proxy"
	I1102 13:34:52.427734       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1102 13:34:52.446316       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:34:52.449266       1 server_others.go:152] "Using iptables Proxier"
	I1102 13:34:52.449315       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1102 13:34:52.449324       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1102 13:34:52.449361       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1102 13:34:52.449677       1 server.go:846] "Version info" version="v1.28.0"
	I1102 13:34:52.449706       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:34:52.450361       1 config.go:97] "Starting endpoint slice config controller"
	I1102 13:34:52.450400       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1102 13:34:52.450386       1 config.go:188] "Starting service config controller"
	I1102 13:34:52.450407       1 config.go:315] "Starting node config controller"
	I1102 13:34:52.450458       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1102 13:34:52.450421       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1102 13:34:52.551257       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1102 13:34:52.551368       1 shared_informer.go:318] Caches are synced for node config
	I1102 13:34:52.551368       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [73eccdf096349fdcce8d28ae1f97082274030bf03f16c81b595fb9049c632fe1] <==
	W1102 13:34:35.449262       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1102 13:34:35.449456       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1102 13:34:35.449466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1102 13:34:35.449486       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1102 13:34:35.449628       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1102 13:34:35.449674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1102 13:34:35.449836       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1102 13:34:35.449863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1102 13:34:36.359742       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1102 13:34:36.359900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1102 13:34:36.379653       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1102 13:34:36.379702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1102 13:34:36.404893       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1102 13:34:36.404997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1102 13:34:36.428527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1102 13:34:36.428601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1102 13:34:36.525779       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1102 13:34:36.525819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1102 13:34:36.581451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1102 13:34:36.581494       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1102 13:34:36.600166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1102 13:34:36.600203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1102 13:34:36.807909       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1102 13:34:36.807938       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1102 13:34:39.846725       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 02 13:34:51 old-k8s-version-054159 kubelet[1408]: I1102 13:34:51.252092    1408 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 02 13:34:52 old-k8s-version-054159 kubelet[1408]: I1102 13:34:52.020545    1408 topology_manager.go:215] "Topology Admit Handler" podUID="d388d3f4-5f54-4cdc-8b0f-ea4929149bd5" podNamespace="kube-system" podName="kube-proxy-l2sh4"
	Nov 02 13:34:52 old-k8s-version-054159 kubelet[1408]: I1102 13:34:52.026161    1408 topology_manager.go:215] "Topology Admit Handler" podUID="8f518a86-d135-4e6f-8945-a200f813f3cf" podNamespace="kube-system" podName="kindnet-cmgvz"
	Nov 02 13:34:52 old-k8s-version-054159 kubelet[1408]: I1102 13:34:52.062729    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d388d3f4-5f54-4cdc-8b0f-ea4929149bd5-kube-proxy\") pod \"kube-proxy-l2sh4\" (UID: \"d388d3f4-5f54-4cdc-8b0f-ea4929149bd5\") " pod="kube-system/kube-proxy-l2sh4"
	Nov 02 13:34:52 old-k8s-version-054159 kubelet[1408]: I1102 13:34:52.062789    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8f518a86-d135-4e6f-8945-a200f813f3cf-cni-cfg\") pod \"kindnet-cmgvz\" (UID: \"8f518a86-d135-4e6f-8945-a200f813f3cf\") " pod="kube-system/kindnet-cmgvz"
	Nov 02 13:34:52 old-k8s-version-054159 kubelet[1408]: I1102 13:34:52.062817    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f518a86-d135-4e6f-8945-a200f813f3cf-xtables-lock\") pod \"kindnet-cmgvz\" (UID: \"8f518a86-d135-4e6f-8945-a200f813f3cf\") " pod="kube-system/kindnet-cmgvz"
	Nov 02 13:34:52 old-k8s-version-054159 kubelet[1408]: I1102 13:34:52.062844    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f518a86-d135-4e6f-8945-a200f813f3cf-lib-modules\") pod \"kindnet-cmgvz\" (UID: \"8f518a86-d135-4e6f-8945-a200f813f3cf\") " pod="kube-system/kindnet-cmgvz"
	Nov 02 13:34:52 old-k8s-version-054159 kubelet[1408]: I1102 13:34:52.062882    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d388d3f4-5f54-4cdc-8b0f-ea4929149bd5-lib-modules\") pod \"kube-proxy-l2sh4\" (UID: \"d388d3f4-5f54-4cdc-8b0f-ea4929149bd5\") " pod="kube-system/kube-proxy-l2sh4"
	Nov 02 13:34:52 old-k8s-version-054159 kubelet[1408]: I1102 13:34:52.062924    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j62mk\" (UniqueName: \"kubernetes.io/projected/d388d3f4-5f54-4cdc-8b0f-ea4929149bd5-kube-api-access-j62mk\") pod \"kube-proxy-l2sh4\" (UID: \"d388d3f4-5f54-4cdc-8b0f-ea4929149bd5\") " pod="kube-system/kube-proxy-l2sh4"
	Nov 02 13:34:52 old-k8s-version-054159 kubelet[1408]: I1102 13:34:52.062956    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d388d3f4-5f54-4cdc-8b0f-ea4929149bd5-xtables-lock\") pod \"kube-proxy-l2sh4\" (UID: \"d388d3f4-5f54-4cdc-8b0f-ea4929149bd5\") " pod="kube-system/kube-proxy-l2sh4"
	Nov 02 13:34:52 old-k8s-version-054159 kubelet[1408]: I1102 13:34:52.062986    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vwwx\" (UniqueName: \"kubernetes.io/projected/8f518a86-d135-4e6f-8945-a200f813f3cf-kube-api-access-5vwwx\") pod \"kindnet-cmgvz\" (UID: \"8f518a86-d135-4e6f-8945-a200f813f3cf\") " pod="kube-system/kindnet-cmgvz"
	Nov 02 13:34:54 old-k8s-version-054159 kubelet[1408]: I1102 13:34:54.547138    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-l2sh4" podStartSLOduration=2.547076032 podCreationTimestamp="2025-11-02 13:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:34:52.532315686 +0000 UTC m=+14.150278031" watchObservedRunningTime="2025-11-02 13:34:54.547076032 +0000 UTC m=+16.165038379"
	Nov 02 13:35:04 old-k8s-version-054159 kubelet[1408]: I1102 13:35:04.428626    1408 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 02 13:35:04 old-k8s-version-054159 kubelet[1408]: I1102 13:35:04.450116    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-cmgvz" podStartSLOduration=10.999741485 podCreationTimestamp="2025-11-02 13:34:52 +0000 UTC" firstStartedPulling="2025-11-02 13:34:52.341485593 +0000 UTC m=+13.959447929" lastFinishedPulling="2025-11-02 13:34:53.791808563 +0000 UTC m=+15.409770903" observedRunningTime="2025-11-02 13:34:54.548531076 +0000 UTC m=+16.166493445" watchObservedRunningTime="2025-11-02 13:35:04.450064459 +0000 UTC m=+26.068026805"
	Nov 02 13:35:04 old-k8s-version-054159 kubelet[1408]: I1102 13:35:04.450336    1408 topology_manager.go:215] "Topology Admit Handler" podUID="bc262f10-de9b-4454-afcd-05e4195906ff" podNamespace="kube-system" podName="storage-provisioner"
	Nov 02 13:35:04 old-k8s-version-054159 kubelet[1408]: I1102 13:35:04.451615    1408 topology_manager.go:215] "Topology Admit Handler" podUID="824870c1-a7b3-46b0-90bf-8b731c8a4e4a" podNamespace="kube-system" podName="coredns-5dd5756b68-th5sb"
	Nov 02 13:35:04 old-k8s-version-054159 kubelet[1408]: I1102 13:35:04.558820    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bc262f10-de9b-4454-afcd-05e4195906ff-tmp\") pod \"storage-provisioner\" (UID: \"bc262f10-de9b-4454-afcd-05e4195906ff\") " pod="kube-system/storage-provisioner"
	Nov 02 13:35:04 old-k8s-version-054159 kubelet[1408]: I1102 13:35:04.558881    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/824870c1-a7b3-46b0-90bf-8b731c8a4e4a-config-volume\") pod \"coredns-5dd5756b68-th5sb\" (UID: \"824870c1-a7b3-46b0-90bf-8b731c8a4e4a\") " pod="kube-system/coredns-5dd5756b68-th5sb"
	Nov 02 13:35:04 old-k8s-version-054159 kubelet[1408]: I1102 13:35:04.558988    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pxn4\" (UniqueName: \"kubernetes.io/projected/bc262f10-de9b-4454-afcd-05e4195906ff-kube-api-access-2pxn4\") pod \"storage-provisioner\" (UID: \"bc262f10-de9b-4454-afcd-05e4195906ff\") " pod="kube-system/storage-provisioner"
	Nov 02 13:35:04 old-k8s-version-054159 kubelet[1408]: I1102 13:35:04.559053    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ltp\" (UniqueName: \"kubernetes.io/projected/824870c1-a7b3-46b0-90bf-8b731c8a4e4a-kube-api-access-d2ltp\") pod \"coredns-5dd5756b68-th5sb\" (UID: \"824870c1-a7b3-46b0-90bf-8b731c8a4e4a\") " pod="kube-system/coredns-5dd5756b68-th5sb"
	Nov 02 13:35:05 old-k8s-version-054159 kubelet[1408]: I1102 13:35:05.560645    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.560595801 podCreationTimestamp="2025-11-02 13:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:35:05.560441258 +0000 UTC m=+27.178403605" watchObservedRunningTime="2025-11-02 13:35:05.560595801 +0000 UTC m=+27.178558147"
	Nov 02 13:35:05 old-k8s-version-054159 kubelet[1408]: I1102 13:35:05.570132    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-th5sb" podStartSLOduration=13.570081709 podCreationTimestamp="2025-11-02 13:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:35:05.569924147 +0000 UTC m=+27.187886495" watchObservedRunningTime="2025-11-02 13:35:05.570081709 +0000 UTC m=+27.188044055"
	Nov 02 13:35:07 old-k8s-version-054159 kubelet[1408]: I1102 13:35:07.663405    1408 topology_manager.go:215] "Topology Admit Handler" podUID="b69c12c6-19df-47c3-8096-02b70e53bbd1" podNamespace="default" podName="busybox"
	Nov 02 13:35:07 old-k8s-version-054159 kubelet[1408]: I1102 13:35:07.775212    1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6j4k\" (UniqueName: \"kubernetes.io/projected/b69c12c6-19df-47c3-8096-02b70e53bbd1-kube-api-access-c6j4k\") pod \"busybox\" (UID: \"b69c12c6-19df-47c3-8096-02b70e53bbd1\") " pod="default/busybox"
	Nov 02 13:35:09 old-k8s-version-054159 kubelet[1408]: I1102 13:35:09.576930    1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.188416757 podCreationTimestamp="2025-11-02 13:35:07 +0000 UTC" firstStartedPulling="2025-11-02 13:35:07.984046894 +0000 UTC m=+29.602009232" lastFinishedPulling="2025-11-02 13:35:09.372503251 +0000 UTC m=+30.990465597" observedRunningTime="2025-11-02 13:35:09.576678047 +0000 UTC m=+31.194640395" watchObservedRunningTime="2025-11-02 13:35:09.576873122 +0000 UTC m=+31.194835468"
	
	
	==> storage-provisioner [bb2bb52a057a12aa1ed895ed1af8a1bc6dcb63a13b4aead18b3b3b2634d07455] <==
	I1102 13:35:04.808932       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:35:04.819202       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:35:04.819291       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1102 13:35:04.828275       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:35:04.828357       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"299e1231-1111-4143-bceb-5c3455b6c833", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-054159_6e922066-7806-4b7a-a552-3aa0fb3a6a2d became leader
	I1102 13:35:04.828473       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054159_6e922066-7806-4b7a-a552-3aa0fb3a6a2d!
	I1102 13:35:04.929605       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054159_6e922066-7806-4b7a-a552-3aa0fb3a6a2d!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-054159 -n old-k8s-version-054159
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-054159 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.73s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-978795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-978795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (265.004198ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:36:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
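
Note: the MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-cluster check, which (per the quoted stderr) shells out to `sudo runc list -f json` inside the node; on this crio runtime /run/runc does not exist, so the probe exits non-zero before the addon is ever applied. A minimal Go sketch of that probe, assuming only what the error text shows (an illustration, not minikube's actual source):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The exact command quoted in the stderr block above.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// On a crio node /run/runc is absent, so this branch fires even
		// though no container is actually paused.
		fmt.Printf("check paused: list paused: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}
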
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-978795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-978795 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-978795 describe deploy/metrics-server -n kube-system: exit status 1 (64.743279ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-978795 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
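
Note: the expected string follows from the flags at start_stop_delete_test.go:203: --registries=MetricsServer=fake.domain is prefixed onto the image from --images=MetricsServer=registry.k8s.io/echoserver:1.4, giving fake.domain/registry.k8s.io/echoserver:1.4. Because the deployment was never created (NotFound above), the describe output is empty and the substring check fails. A minimal Go sketch of that check, reusing the kubectl invocation from step 213 (an illustration, not the test's source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "fake.domain/registry.k8s.io/echoserver:1.4"
	// Same command the test runs at start_stop_delete_test.go:213.
	out, _ := exec.Command("kubectl", "--context", "no-preload-978795",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	// The assertion reduces to a substring match on the describe output.
	fmt.Println("image override applied:", strings.Contains(string(out), want))
}
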
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-978795
helpers_test.go:243: (dbg) docker inspect no-preload-978795:

-- stdout --
	[
	    {
	        "Id": "f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e",
	        "Created": "2025-11-02T13:35:24.534535218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299937,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:35:24.570810112Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/hosts",
	        "LogPath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e-json.log",
	        "Name": "/no-preload-978795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-978795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-978795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e",
	                "LowerDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-978795",
	                "Source": "/var/lib/docker/volumes/no-preload-978795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-978795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-978795",
	                "name.minikube.sigs.k8s.io": "no-preload-978795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e382d6debeabfaef05164acd1179bfe22e34cc68aeeef564889fb6145080f6e",
	            "SandboxKey": "/var/run/docker/netns/8e382d6debea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-978795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:69:34:79:cd:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11ed3231c38232a3af5735052e72b0c429b6b7e978e401e7b612ef36fc53303a",
	                    "EndpointID": "6ced800938e507e12a6df57f90ac52875d340955d63b5f894da3c464455d4c0e",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-978795",
	                        "f2b4d88c9fa8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-978795 -n no-preload-978795
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-978795 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-978795 logs -n 25: (1.181491141s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-123357 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p bridge-123357 sudo docker system info                                                                                                                                 │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p bridge-123357 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p bridge-123357 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p bridge-123357 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cri-dockerd --version                                                                                                                              │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p bridge-123357 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo containerd config dump                                                                                                                             │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo crio config                                                                                                                                        │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ delete  │ -p bridge-123357                                                                                                                                                         │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                        │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p kubernetes-upgrade-273161                                                                                                                                             │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p disable-driver-mounts-560932                                                                                                                                          │ disable-driver-mounts-560932 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-978795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:36:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:36:05.435590  314692 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:36:05.435743  314692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:05.435755  314692 out.go:374] Setting ErrFile to fd 2...
	I1102 13:36:05.435762  314692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:05.436031  314692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:36:05.436677  314692 out.go:368] Setting JSON to false
	I1102 13:36:05.438502  314692 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4717,"bootTime":1762085848,"procs":409,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:36:05.438644  314692 start.go:143] virtualization: kvm guest
	I1102 13:36:05.441073  314692 out.go:179] * [default-k8s-diff-port-538419] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:36:05.442523  314692 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:36:05.442625  314692 notify.go:221] Checking for updates...
	I1102 13:36:05.444929  314692 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:36:05.446245  314692 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:36:05.447444  314692 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:36:05.448643  314692 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:36:05.449853  314692 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:36:05.451841  314692 config.go:182] Loaded profile config "embed-certs-748183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:05.451995  314692 config.go:182] Loaded profile config "no-preload-978795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:05.452137  314692 config.go:182] Loaded profile config "old-k8s-version-054159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1102 13:36:05.452265  314692 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:36:05.482704  314692 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:36:05.482885  314692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:36:05.557426  314692 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-02 13:36:05.543426305 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:36:05.557588  314692 docker.go:319] overlay module found
	I1102 13:36:05.560384  314692 out.go:179] * Using the docker driver based on user configuration
	I1102 13:36:05.561760  314692 start.go:309] selected driver: docker
	I1102 13:36:05.561779  314692 start.go:930] validating driver "docker" against <nil>
	I1102 13:36:05.561795  314692 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:36:05.562636  314692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:36:05.638262  314692 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-02 13:36:05.626506486 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:36:05.638476  314692 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 13:36:05.638823  314692 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:36:05.641773  314692 out.go:179] * Using Docker driver with root privileges
	I1102 13:36:05.643726  314692 cni.go:84] Creating CNI manager for ""
	I1102 13:36:05.643812  314692 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:36:05.643827  314692 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 13:36:05.643913  314692 start.go:353] cluster config:
	{Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:36:05.645664  314692 out.go:179] * Starting "default-k8s-diff-port-538419" primary control-plane node in "default-k8s-diff-port-538419" cluster
	I1102 13:36:05.647119  314692 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:36:05.648460  314692 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:36:05.649712  314692 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:36:05.649758  314692 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:36:05.649766  314692 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:36:05.649771  314692 cache.go:59] Caching tarball of preloaded images
	I1102 13:36:05.649861  314692 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:36:05.649876  314692 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:36:05.650007  314692 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:36:05.650038  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json: {Name:mkd7944051edd60e9de4b9749b633bdc1f3cad40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
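
The two lines above show the profile being persisted to config.json under a short-lived file lock with a 500ms retry delay and a 1m timeout. A minimal standalone sketch of that pattern, using only the standard library (the saveConfigLocked helper and the lock-file scheme are illustrative assumptions, not minikube's actual lock.go):

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"time"
)

// saveConfigLocked writes cfg to path as JSON, guarding the write with a
// lock file (path + ".lock") created with O_EXCL so a concurrent writer
// retries instead of clobbering the file.
func saveConfigLocked(path string, cfg any, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			break
		}
		if !errors.Is(err, os.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", lock)
		}
		time.Sleep(100 * time.Millisecond) // comparable role to the Delay:500ms above
	}
	defer os.Remove(lock)

	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	cfg := map[string]any{"Name": "default-k8s-diff-port-538419", "APIServerPort": 8444}
	if err := saveConfigLocked("config.json", cfg, time.Minute); err != nil {
		fmt.Println("save failed:", err)
	}
}
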
	I1102 13:36:05.675200  314692 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:36:05.675231  314692 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:36:05.675251  314692 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:36:05.675286  314692 start.go:360] acquireMachinesLock for default-k8s-diff-port-538419: {Name:mkbdbe3f57bcc3a77e6d88e56b57947595d7b695 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:05.675387  314692 start.go:364] duration metric: took 79.928µs to acquireMachinesLock for "default-k8s-diff-port-538419"
	I1102 13:36:05.675415  314692 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:36:05.675516  314692 start.go:125] createHost starting for "" (driver="docker")
	W1102 13:36:03.231136  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	W1102 13:36:05.236180  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	I1102 13:36:02.313984  309290 out.go:252]   - Generating certificates and keys ...
	I1102 13:36:02.314106  309290 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 13:36:02.314234  309290 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 13:36:03.069894  309290 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 13:36:03.245364  309290 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 13:36:03.895960  309290 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 13:36:04.128345  309290 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 13:36:04.472099  309290 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 13:36:04.472259  309290 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-748183 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1102 13:36:04.538713  309290 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 13:36:04.538906  309290 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-748183 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1102 13:36:04.580087  309290 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 13:36:05.226663  309290 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 13:36:05.290451  309290 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 13:36:05.290556  309290 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 13:36:05.641708  309290 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 13:36:05.960162  309290 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 13:36:06.126342  309290 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 13:36:06.327269  309290 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 13:36:06.773625  309290 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 13:36:06.774998  309290 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 13:36:06.780246  309290 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1102 13:36:04.157270  299350 node_ready.go:57] node "no-preload-978795" has "Ready":"False" status (will retry)
	W1102 13:36:06.657624  299350 node_ready.go:57] node "no-preload-978795" has "Ready":"False" status (will retry)
	I1102 13:36:05.678199  314692 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 13:36:05.678459  314692 start.go:159] libmachine.API.Create for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:36:05.678519  314692 client.go:173] LocalClient.Create starting
	I1102 13:36:05.678601  314692 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem
	I1102 13:36:05.678641  314692 main.go:143] libmachine: Decoding PEM data...
	I1102 13:36:05.678658  314692 main.go:143] libmachine: Parsing certificate...
	I1102 13:36:05.678714  314692 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem
	I1102 13:36:05.678733  314692 main.go:143] libmachine: Decoding PEM data...
	I1102 13:36:05.678742  314692 main.go:143] libmachine: Parsing certificate...
	I1102 13:36:05.679050  314692 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 13:36:05.696126  314692 cli_runner.go:211] docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 13:36:05.696199  314692 network_create.go:284] running [docker network inspect default-k8s-diff-port-538419] to gather additional debugging logs...
	I1102 13:36:05.696222  314692 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419
	W1102 13:36:05.715284  314692 cli_runner.go:211] docker network inspect default-k8s-diff-port-538419 returned with exit code 1
	I1102 13:36:05.715325  314692 network_create.go:287] error running [docker network inspect default-k8s-diff-port-538419]: docker network inspect default-k8s-diff-port-538419: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-538419 not found
	I1102 13:36:05.715351  314692 network_create.go:289] output of [docker network inspect default-k8s-diff-port-538419]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-538419 not found
	
	** /stderr **
	I1102 13:36:05.715499  314692 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:36:05.737307  314692 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9493238624b4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:ff:51:3e:e4:f4} reservation:<nil>}
	I1102 13:36:05.738188  314692 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe6e64be95e5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ec:8c:d9:e4:62} reservation:<nil>}
	I1102 13:36:05.739220  314692 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce0c0e777855 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:03:0f:01:14:50} reservation:<nil>}
	I1102 13:36:05.739841  314692 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4ae33975e63c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:e4:ca:ff:f5:a7} reservation:<nil>}
	I1102 13:36:05.740727  314692 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ef67f0}
	I1102 13:36:05.740760  314692 network_create.go:124] attempt to create docker network default-k8s-diff-port-538419 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1102 13:36:05.740823  314692 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-538419 default-k8s-diff-port-538419
	I1102 13:36:05.814279  314692 network_create.go:108] docker network default-k8s-diff-port-538419 192.168.85.0/24 created
	I1102 13:36:05.814311  314692 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-538419" container
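
The network.go lines above walk candidate private /24 subnets (49, 58, 67, 76, ...) and take the first one not already claimed by an existing docker bridge, landing on 192.168.85.0/24. A minimal sketch of that scan (the takenSubnets set is hard-coded here from the log; this is an illustration, not minikube's network package):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Subnets already owned by docker bridges, per the "skipping subnet" lines above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	// Candidate third octets step by 9, matching the 49/58/67/76/85 pattern in the log.
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet", ipnet) // 192.168.85.0/24 here
		break
	}
}
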
	I1102 13:36:05.814367  314692 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 13:36:05.837693  314692 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-538419 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-538419 --label created_by.minikube.sigs.k8s.io=true
	I1102 13:36:05.859791  314692 oci.go:103] Successfully created a docker volume default-k8s-diff-port-538419
	I1102 13:36:05.859897  314692 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-538419-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-538419 --entrypoint /usr/bin/test -v default-k8s-diff-port-538419:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 13:36:06.881907  314692 cli_runner.go:217] Completed: docker run --rm --name default-k8s-diff-port-538419-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-538419 --entrypoint /usr/bin/test -v default-k8s-diff-port-538419:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.021960601s)
	I1102 13:36:06.881936  314692 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-538419
	I1102 13:36:06.881985  314692 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:36:06.882013  314692 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 13:36:06.882080  314692 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-538419:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1102 13:36:10.275316  314692 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-538419:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.393191884s)
	I1102 13:36:10.275358  314692 kic.go:203] duration metric: took 3.393343348s to extract preloaded images to volume ...
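
The extraction step is a throwaway container whose entrypoint is tar, with the preload tarball mounted read-only and the target volume mounted at /extractDir. A stripped-down host-side equivalent via os/exec (paths shortened to placeholders; sketch only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4" // host path, placeholder
	volume := "default-k8s-diff-port-538419"                                  // docker volume name
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"

	// Same shape as the cli_runner invocation above.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("extract failed:", err)
	}
}
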
	W1102 13:36:10.275432  314692 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1102 13:36:10.275463  314692 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1102 13:36:10.275506  314692 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 13:36:10.335732  314692 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-538419 --name default-k8s-diff-port-538419 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-538419 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-538419 --network default-k8s-diff-port-538419 --ip 192.168.85.2 --volume default-k8s-diff-port-538419:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	W1102 13:36:07.731890  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	W1102 13:36:09.844195  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	I1102 13:36:06.781931  309290 out.go:252]   - Booting up control plane ...
	I1102 13:36:06.782085  309290 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 13:36:06.782226  309290 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 13:36:06.783116  309290 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 13:36:06.801117  309290 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 13:36:06.801274  309290 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 13:36:06.808806  309290 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 13:36:06.809054  309290 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 13:36:06.809131  309290 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 13:36:06.934754  309290 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 13:36:06.934882  309290 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 13:36:07.935656  309290 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.0010106s
	I1102 13:36:07.938500  309290 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 13:36:07.938641  309290 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1102 13:36:07.938767  309290 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 13:36:07.938871  309290 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 13:36:11.261013  309290 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.32231961s
	I1102 13:36:11.843760  309290 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.905255384s
	I1102 13:36:13.439964  309290 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501297656s
	I1102 13:36:13.450594  309290 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 13:36:13.459158  309290 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 13:36:13.467549  309290 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 13:36:13.467853  309290 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-748183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 13:36:13.475110  309290 kubeadm.go:319] [bootstrap-token] Using token: 3hyvdp.6f6epf4ijrgc86v7
	W1102 13:36:08.668309  299350 node_ready.go:57] node "no-preload-978795" has "Ready":"False" status (will retry)
	W1102 13:36:11.157318  299350 node_ready.go:57] node "no-preload-978795" has "Ready":"False" status (will retry)
	I1102 13:36:12.656920  299350 node_ready.go:49] node "no-preload-978795" is "Ready"
	I1102 13:36:12.656948  299350 node_ready.go:38] duration metric: took 12.50324534s for node "no-preload-978795" to be "Ready" ...
	I1102 13:36:12.656965  299350 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:36:12.657024  299350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:36:12.668969  299350 api_server.go:72] duration metric: took 12.950821403s to wait for apiserver process to appear ...
	I1102 13:36:12.669000  299350 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:36:12.669028  299350 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1102 13:36:12.673841  299350 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1102 13:36:12.674694  299350 api_server.go:141] control plane version: v1.34.1
	I1102 13:36:12.674720  299350 api_server.go:131] duration metric: took 5.712193ms to wait for apiserver health ...
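
The healthz wait above is an HTTPS GET against the apiserver that succeeds once /healthz returns 200 with body "ok". A self-contained sketch of that probe (InsecureSkipVerify stands in for the real client-certificate setup; demo only):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	url := "https://192.168.94.2:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println(url, "returned 200: ok")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // poll until the apiserver is up
	}
}
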
	I1102 13:36:12.674729  299350 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:36:12.677381  299350 system_pods.go:59] 8 kube-system pods found
	I1102 13:36:12.677409  299350 system_pods.go:61] "coredns-66bc5c9577-2dtpc" [8533e5ca-78ef-4401-b967-018eceeb5321] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:12.677415  299350 system_pods.go:61] "etcd-no-preload-978795" [3accbd43-641b-4243-9b9f-d7b40c27d25a] Running
	I1102 13:36:12.677420  299350 system_pods.go:61] "kindnet-d8n4x" [c337ae93-812a-455b-bfe4-cdf49864936f] Running
	I1102 13:36:12.677424  299350 system_pods.go:61] "kube-apiserver-no-preload-978795" [aca34947-60ef-4b9f-a159-b323fd9c325e] Running
	I1102 13:36:12.677431  299350 system_pods.go:61] "kube-controller-manager-no-preload-978795" [995b65a0-2705-4e33-a002-44f3db50a736] Running
	I1102 13:36:12.677436  299350 system_pods.go:61] "kube-proxy-rmkmd" [98f26f5f-cb23-4052-a93d-328210c54a54] Running
	I1102 13:36:12.677439  299350 system_pods.go:61] "kube-scheduler-no-preload-978795" [f2b2b91b-09ee-414a-9675-eafad041fcfa] Running
	I1102 13:36:12.677443  299350 system_pods.go:61] "storage-provisioner" [0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:36:12.677448  299350 system_pods.go:74] duration metric: took 2.714415ms to wait for pod list to return data ...
	I1102 13:36:12.677454  299350 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:36:12.679625  299350 default_sa.go:45] found service account: "default"
	I1102 13:36:12.679642  299350 default_sa.go:55] duration metric: took 2.182973ms for default service account to be created ...
	I1102 13:36:12.679651  299350 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:36:12.682139  299350 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:12.682173  299350 system_pods.go:89] "coredns-66bc5c9577-2dtpc" [8533e5ca-78ef-4401-b967-018eceeb5321] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:12.682182  299350 system_pods.go:89] "etcd-no-preload-978795" [3accbd43-641b-4243-9b9f-d7b40c27d25a] Running
	I1102 13:36:12.682197  299350 system_pods.go:89] "kindnet-d8n4x" [c337ae93-812a-455b-bfe4-cdf49864936f] Running
	I1102 13:36:12.682204  299350 system_pods.go:89] "kube-apiserver-no-preload-978795" [aca34947-60ef-4b9f-a159-b323fd9c325e] Running
	I1102 13:36:12.682214  299350 system_pods.go:89] "kube-controller-manager-no-preload-978795" [995b65a0-2705-4e33-a002-44f3db50a736] Running
	I1102 13:36:12.682222  299350 system_pods.go:89] "kube-proxy-rmkmd" [98f26f5f-cb23-4052-a93d-328210c54a54] Running
	I1102 13:36:12.682229  299350 system_pods.go:89] "kube-scheduler-no-preload-978795" [f2b2b91b-09ee-414a-9675-eafad041fcfa] Running
	I1102 13:36:12.682241  299350 system_pods.go:89] "storage-provisioner" [0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:36:12.682264  299350 retry.go:31] will retry after 248.72906ms: missing components: kube-dns
	I1102 13:36:12.935533  299350 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:12.935591  299350 system_pods.go:89] "coredns-66bc5c9577-2dtpc" [8533e5ca-78ef-4401-b967-018eceeb5321] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:12.935603  299350 system_pods.go:89] "etcd-no-preload-978795" [3accbd43-641b-4243-9b9f-d7b40c27d25a] Running
	I1102 13:36:12.935612  299350 system_pods.go:89] "kindnet-d8n4x" [c337ae93-812a-455b-bfe4-cdf49864936f] Running
	I1102 13:36:12.935618  299350 system_pods.go:89] "kube-apiserver-no-preload-978795" [aca34947-60ef-4b9f-a159-b323fd9c325e] Running
	I1102 13:36:12.935624  299350 system_pods.go:89] "kube-controller-manager-no-preload-978795" [995b65a0-2705-4e33-a002-44f3db50a736] Running
	I1102 13:36:12.935629  299350 system_pods.go:89] "kube-proxy-rmkmd" [98f26f5f-cb23-4052-a93d-328210c54a54] Running
	I1102 13:36:12.935634  299350 system_pods.go:89] "kube-scheduler-no-preload-978795" [f2b2b91b-09ee-414a-9675-eafad041fcfa] Running
	I1102 13:36:12.935663  299350 system_pods.go:89] "storage-provisioner" [0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:36:12.935688  299350 retry.go:31] will retry after 385.867833ms: missing components: kube-dns
	I1102 13:36:13.326168  299350 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:13.326202  299350 system_pods.go:89] "coredns-66bc5c9577-2dtpc" [8533e5ca-78ef-4401-b967-018eceeb5321] Running
	I1102 13:36:13.326211  299350 system_pods.go:89] "etcd-no-preload-978795" [3accbd43-641b-4243-9b9f-d7b40c27d25a] Running
	I1102 13:36:13.326216  299350 system_pods.go:89] "kindnet-d8n4x" [c337ae93-812a-455b-bfe4-cdf49864936f] Running
	I1102 13:36:13.326222  299350 system_pods.go:89] "kube-apiserver-no-preload-978795" [aca34947-60ef-4b9f-a159-b323fd9c325e] Running
	I1102 13:36:13.326227  299350 system_pods.go:89] "kube-controller-manager-no-preload-978795" [995b65a0-2705-4e33-a002-44f3db50a736] Running
	I1102 13:36:13.326232  299350 system_pods.go:89] "kube-proxy-rmkmd" [98f26f5f-cb23-4052-a93d-328210c54a54] Running
	I1102 13:36:13.326237  299350 system_pods.go:89] "kube-scheduler-no-preload-978795" [f2b2b91b-09ee-414a-9675-eafad041fcfa] Running
	I1102 13:36:13.326244  299350 system_pods.go:89] "storage-provisioner" [0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1] Running
	I1102 13:36:13.326266  299350 system_pods.go:126] duration metric: took 646.609426ms to wait for k8s-apps to be running ...
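
The retry.go lines above are a poll-until-running loop: list the kube-system pods, and if any expected component (here kube-dns) is still Pending, sleep a jittered interval and list again until a deadline. The shape of that loop, independent of client-go (listPendingComponents is a stub so the sketch runs standalone):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// listPendingComponents would query the cluster; stubbed here so the
// sketch is self-contained and terminates.
func listPendingComponents(attempt int) []string {
	if attempt < 3 {
		return []string{"kube-dns"}
	}
	return nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		missing := listPendingComponents(attempt)
		if len(missing) == 0 {
			fmt.Println("k8s-apps are running")
			return
		}
		// Jittered backoff, comparable to the ~249ms and ~386ms waits in the log.
		wait := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
		time.Sleep(wait)
	}
	fmt.Println("timed out waiting for k8s-apps")
}
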
	I1102 13:36:13.326274  299350 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:36:13.326325  299350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:36:13.339761  299350 system_svc.go:56] duration metric: took 13.478931ms WaitForService to wait for kubelet
	I1102 13:36:13.339793  299350 kubeadm.go:587] duration metric: took 13.621650881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:36:13.339815  299350 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:36:13.342730  299350 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:36:13.342755  299350 node_conditions.go:123] node cpu capacity is 8
	I1102 13:36:13.342771  299350 node_conditions.go:105] duration metric: took 2.950985ms to run NodePressure ...
	I1102 13:36:13.342785  299350 start.go:242] waiting for startup goroutines ...
	I1102 13:36:13.342794  299350 start.go:247] waiting for cluster config update ...
	I1102 13:36:13.342810  299350 start.go:256] writing updated cluster config ...
	I1102 13:36:13.343084  299350 ssh_runner.go:195] Run: rm -f paused
	I1102 13:36:13.346918  299350 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:36:13.349803  299350 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.354285  299350 pod_ready.go:94] pod "coredns-66bc5c9577-2dtpc" is "Ready"
	I1102 13:36:13.354304  299350 pod_ready.go:86] duration metric: took 4.475697ms for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.356253  299350 pod_ready.go:83] waiting for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.359951  299350 pod_ready.go:94] pod "etcd-no-preload-978795" is "Ready"
	I1102 13:36:13.359969  299350 pod_ready.go:86] duration metric: took 3.697797ms for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.361639  299350 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.364958  299350 pod_ready.go:94] pod "kube-apiserver-no-preload-978795" is "Ready"
	I1102 13:36:13.364981  299350 pod_ready.go:86] duration metric: took 3.323803ms for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.366580  299350 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.476104  309290 out.go:252]   - Configuring RBAC rules ...
	I1102 13:36:13.476262  309290 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 13:36:13.481572  309290 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 13:36:13.486194  309290 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 13:36:13.488347  309290 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 13:36:13.490661  309290 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 13:36:13.492848  309290 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 13:36:13.847654  309290 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 13:36:14.264087  309290 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 13:36:14.845928  309290 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 13:36:14.846937  309290 kubeadm.go:319] 
	I1102 13:36:14.847017  309290 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 13:36:14.847029  309290 kubeadm.go:319] 
	I1102 13:36:14.847091  309290 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 13:36:14.847099  309290 kubeadm.go:319] 
	I1102 13:36:14.847119  309290 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 13:36:14.847177  309290 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 13:36:14.847252  309290 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 13:36:14.847271  309290 kubeadm.go:319] 
	I1102 13:36:14.847314  309290 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 13:36:14.847320  309290 kubeadm.go:319] 
	I1102 13:36:14.847363  309290 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 13:36:14.847370  309290 kubeadm.go:319] 
	I1102 13:36:14.847411  309290 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 13:36:14.847510  309290 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 13:36:14.847637  309290 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 13:36:14.847663  309290 kubeadm.go:319] 
	I1102 13:36:14.847795  309290 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 13:36:14.847901  309290 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 13:36:14.847910  309290 kubeadm.go:319] 
	I1102 13:36:14.848013  309290 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3hyvdp.6f6epf4ijrgc86v7 \
	I1102 13:36:14.848179  309290 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 \
	I1102 13:36:14.848214  309290 kubeadm.go:319] 	--control-plane 
	I1102 13:36:14.848231  309290 kubeadm.go:319] 
	I1102 13:36:14.848347  309290 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 13:36:14.848358  309290 kubeadm.go:319] 
	I1102 13:36:14.848471  309290 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3hyvdp.6f6epf4ijrgc86v7 \
	I1102 13:36:14.848632  309290 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 
	I1102 13:36:14.852045  309290 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1102 13:36:14.852139  309290 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
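
The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). It can be recomputed from ca.crt with the standard library; a short sketch (the path is the conventional kubeadm location on the node):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// DER-encode the SubjectPublicKeyInfo and hash it, as kubeadm does.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
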
	I1102 13:36:14.852161  309290 cni.go:84] Creating CNI manager for ""
	I1102 13:36:14.852170  309290 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:36:14.854260  309290 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 13:36:13.750618  299350 pod_ready.go:94] pod "kube-controller-manager-no-preload-978795" is "Ready"
	I1102 13:36:13.750648  299350 pod_ready.go:86] duration metric: took 384.047554ms for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.951214  299350 pod_ready.go:83] waiting for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:14.351367  299350 pod_ready.go:94] pod "kube-proxy-rmkmd" is "Ready"
	I1102 13:36:14.351393  299350 pod_ready.go:86] duration metric: took 400.155205ms for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:14.550498  299350 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:14.950992  299350 pod_ready.go:94] pod "kube-scheduler-no-preload-978795" is "Ready"
	I1102 13:36:14.951024  299350 pod_ready.go:86] duration metric: took 400.49724ms for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:14.951045  299350 pod_ready.go:40] duration metric: took 1.604102284s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:36:15.004484  299350 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:36:15.007265  299350 out.go:179] * Done! kubectl is now configured to use "no-preload-978795" cluster and "default" namespace by default
	I1102 13:36:10.615016  314692 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Running}}
	I1102 13:36:10.636417  314692 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:36:10.654048  314692 cli_runner.go:164] Run: docker exec default-k8s-diff-port-538419 stat /var/lib/dpkg/alternatives/iptables
	I1102 13:36:10.699729  314692 oci.go:144] the created container "default-k8s-diff-port-538419" has a running status.
	I1102 13:36:10.699770  314692 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa...
	I1102 13:36:10.762874  314692 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 13:36:10.790352  314692 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:36:10.807665  314692 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 13:36:10.807686  314692 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-538419 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 13:36:10.848037  314692 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:36:10.869592  314692 machine.go:94] provisionDockerMachine start ...
	I1102 13:36:10.869697  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:10.897052  314692 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:10.897380  314692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1102 13:36:10.897399  314692 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:36:10.898419  314692 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47708->127.0.0.1:33110: read: connection reset by peer
	I1102 13:36:14.044265  314692 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:36:14.044295  314692 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-538419"
	I1102 13:36:14.044389  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:14.065074  314692 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:14.065363  314692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1102 13:36:14.065385  314692 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-538419 && echo "default-k8s-diff-port-538419" | sudo tee /etc/hostname
	I1102 13:36:14.227057  314692 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:36:14.227141  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:14.248649  314692 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:14.248931  314692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1102 13:36:14.248961  314692 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-538419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-538419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-538419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:36:14.393279  314692 main.go:143] libmachine: SSH cmd err, output: <nil>: 
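	(The shell block above pins the freshly set hostname to 127.0.1.1 so the node resolves its own name without DNS. A minimal sketch of the record it leaves behind, with the hostname taken from this run; the output line is reconstructed, not captured in the log:
	  grep '^127.0.1.1' /etc/hosts
	  # 127.0.1.1 default-k8s-diff-port-538419
	)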
	I1102 13:36:14.393304  314692 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:36:14.393351  314692 ubuntu.go:190] setting up certificates
	I1102 13:36:14.393360  314692 provision.go:84] configureAuth start
	I1102 13:36:14.393407  314692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:36:14.412456  314692 provision.go:143] copyHostCerts
	I1102 13:36:14.412522  314692 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:36:14.412536  314692 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:36:14.412630  314692 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:36:14.412777  314692 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:36:14.412794  314692 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:36:14.412843  314692 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:36:14.412940  314692 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:36:14.412971  314692 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:36:14.413013  314692 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:36:14.413094  314692 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-538419 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-538419 localhost minikube]
	I1102 13:36:14.581836  314692 provision.go:177] copyRemoteCerts
	I1102 13:36:14.581894  314692 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:36:14.581927  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:14.601054  314692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:36:14.702033  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:36:14.722483  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 13:36:14.740538  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:36:14.758816  314692 provision.go:87] duration metric: took 365.441182ms to configureAuth
	I1102 13:36:14.758850  314692 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:36:14.759049  314692 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:14.759176  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:14.777002  314692 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:14.777225  314692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1102 13:36:14.777242  314692 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:36:15.055729  314692 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
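	(The CRIO_MINIKUBE_OPTIONS line written above is how minikube hands extra flags to CRI-O. This assumes the kicbase image's crio systemd unit reads /etc/sysconfig/crio.minikube as an EnvironmentFile, which this log does not itself show; a hedged way to confirm on the node:
	  systemctl show crio -p EnvironmentFiles
	  cat /etc/sysconfig/crio.minikube
	  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	)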
	I1102 13:36:15.055754  314692 machine.go:97] duration metric: took 4.186138313s to provisionDockerMachine
	I1102 13:36:15.055767  314692 client.go:176] duration metric: took 9.377238341s to LocalClient.Create
	I1102 13:36:15.055789  314692 start.go:167] duration metric: took 9.377330266s to libmachine.API.Create "default-k8s-diff-port-538419"
	I1102 13:36:15.055802  314692 start.go:293] postStartSetup for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:36:15.055817  314692 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:36:15.055889  314692 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:36:15.055938  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:15.077020  314692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:36:15.194616  314692 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:36:15.200392  314692 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:36:15.200430  314692 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:36:15.200443  314692 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:36:15.200499  314692 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:36:15.200657  314692 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:36:15.200803  314692 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:36:15.211740  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:15.237974  314692 start.go:296] duration metric: took 182.156913ms for postStartSetup
	I1102 13:36:15.238391  314692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:36:15.259283  314692 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:36:15.259631  314692 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:36:15.259686  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:15.279422  314692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:36:15.376664  314692 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:36:15.381089  314692 start.go:128] duration metric: took 9.705559601s to createHost
	I1102 13:36:15.381118  314692 start.go:83] releasing machines lock for "default-k8s-diff-port-538419", held for 9.705716798s
	I1102 13:36:15.381184  314692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:36:15.398864  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:36:15.398944  314692 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:36:15.398959  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:36:15.398991  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:36:15.399048  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:36:15.399094  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:36:15.399152  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:15.399230  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:36:15.399282  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:15.416861  314692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	W1102 13:36:12.230844  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	W1102 13:36:14.730477  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	I1102 13:36:14.855544  309290 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 13:36:14.860124  309290 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 13:36:14.860144  309290 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 13:36:14.873520  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1102 13:36:15.117704  309290 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 13:36:15.117868  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:15.117961  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-748183 minikube.k8s.io/updated_at=2025_11_02T13_36_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=embed-certs-748183 minikube.k8s.io/primary=true
	I1102 13:36:15.131951  309290 ops.go:34] apiserver oom_adj: -16
	I1102 13:36:15.218322  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:15.719272  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:16.218674  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:15.530719  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:36:15.548152  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:36:15.566056  314692 ssh_runner.go:195] Run: openssl version
	I1102 13:36:15.572246  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:36:15.580874  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:36:15.584764  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:36:15.584830  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:36:15.620546  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:36:15.629741  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:36:15.638338  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:36:15.642268  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:36:15.642313  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:36:15.678223  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:36:15.687210  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:36:15.695816  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:15.699547  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:15.699614  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:15.747068  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:36:15.756607  314692 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:36:15.760925  314692 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
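	(The *.0 symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash naming for trust directories: the link name is the hash printed by the same openssl invocation the provisioner runs. A sketch using a path from this run:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # prints b5213941, so the trust link must be /etc/ssl/certs/b5213941.0
	)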
	I1102 13:36:15.764886  314692 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:36:15.765004  314692 ssh_runner.go:195] Run: cat /version.json
	I1102 13:36:15.825446  314692 ssh_runner.go:195] Run: systemctl --version
	I1102 13:36:15.832220  314692 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:36:15.866819  314692 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:36:15.871788  314692 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:36:15.871857  314692 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:36:15.897972  314692 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1102 13:36:15.897992  314692 start.go:496] detecting cgroup driver to use...
	I1102 13:36:15.898017  314692 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:36:15.898053  314692 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:36:15.913685  314692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:36:15.926165  314692 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:36:15.926219  314692 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:36:15.942756  314692 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:36:15.960041  314692 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:36:16.049211  314692 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:36:16.140550  314692 docker.go:234] disabling docker service ...
	I1102 13:36:16.140652  314692 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:36:16.158746  314692 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:36:16.172119  314692 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:36:16.257545  314692 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:36:16.342957  314692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:36:16.356102  314692 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:36:16.370528  314692 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:36:16.370607  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.380612  314692 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:36:16.380679  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.389314  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.398142  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.406653  314692 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:36:16.415009  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.424040  314692 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.437426  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
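	(Taken together, the sed edits above steer /etc/crio/crio.conf.d/02-crio.conf toward a known state. A hedged reconstruction of the resulting settings, since the file itself is never echoed in this log:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "systemd"
	  # conmon_cgroup = "pod"
	  # default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
	)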
	I1102 13:36:16.446180  314692 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:36:16.453356  314692 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:36:16.460870  314692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:16.542362  314692 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:36:16.650720  314692 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:36:16.650782  314692 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:36:16.654947  314692 start.go:564] Will wait 60s for crictl version
	I1102 13:36:16.654995  314692 ssh_runner.go:195] Run: which crictl
	I1102 13:36:16.658580  314692 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:36:16.682711  314692 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:36:16.682789  314692 ssh_runner.go:195] Run: crio --version
	I1102 13:36:16.711010  314692 ssh_runner.go:195] Run: crio --version
	I1102 13:36:16.743545  314692 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:36:16.745730  314692 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:36:16.769441  314692 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 13:36:16.774346  314692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:36:16.788062  314692 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:36:16.788175  314692 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:36:16.788219  314692 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:36:16.829012  314692 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:36:16.829038  314692 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:36:16.829095  314692 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:36:16.861101  314692 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:36:16.861128  314692 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:36:16.861137  314692 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1102 13:36:16.861232  314692 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
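	(The kubelet unit text above is installed as a systemd drop-in rather than by editing the packaged unit; the scp steps later in this log place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. A sketch for inspecting the merged unit on the node:
	  systemctl cat kubelet
	  # shows kubelet.service plus the 10-kubeadm.conf drop-in carrying the ExecStart above
	)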
	I1102 13:36:16.861306  314692 ssh_runner.go:195] Run: crio config
	I1102 13:36:16.911369  314692 cni.go:84] Creating CNI manager for ""
	I1102 13:36:16.911403  314692 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:36:16.911419  314692 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:36:16.911451  314692 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538419 NodeName:default-k8s-diff-port-538419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:36:16.911656  314692 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:36:16.911714  314692 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:36:16.920055  314692 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:36:16.920130  314692 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:36:16.928948  314692 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 13:36:16.943385  314692 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:36:16.958903  314692 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
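	(The 2224-byte file copied above is the rendered kubeadm config shown earlier. Not part of this test run, but the same file can be validated without persisting changes using kubeadm's standard dry-run mode; a hedged sketch:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	)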
	I1102 13:36:16.972807  314692 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:36:16.976645  314692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:36:16.988026  314692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:17.070379  314692 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:36:17.095169  314692 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419 for IP: 192.168.85.2
	I1102 13:36:17.095194  314692 certs.go:195] generating shared ca certs ...
	I1102 13:36:17.095216  314692 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.095404  314692 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:36:17.095471  314692 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:36:17.095486  314692 certs.go:257] generating profile certs ...
	I1102 13:36:17.095574  314692 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key
	I1102 13:36:17.095593  314692 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.crt with IP's: []
	I1102 13:36:17.314527  314692 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.crt ...
	I1102 13:36:17.314554  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.crt: {Name:mk33b2e40a938c6fe809d4a8e985371cc5806071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.314759  314692 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key ...
	I1102 13:36:17.314780  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key: {Name:mkd72b526d33383930c74a87122742dae4f9c1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.314876  314692 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d
	I1102 13:36:17.314901  314692 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt.ff08289d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1102 13:36:17.676789  314692 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt.ff08289d ...
	I1102 13:36:17.676816  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt.ff08289d: {Name:mkd6b2fb9849a4b5918b1a6c11ed704b30cbfc7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.676982  314692 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d ...
	I1102 13:36:17.676996  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d: {Name:mk943eea6a4f97dc7db9628f9cf8c6ad9a1a0ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.677066  314692 certs.go:382] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt.ff08289d -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt
	I1102 13:36:17.677142  314692 certs.go:386] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key
	I1102 13:36:17.677195  314692 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key
	I1102 13:36:17.677211  314692 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt with IP's: []
	I1102 13:36:17.795777  314692 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt ...
	I1102 13:36:17.795804  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt: {Name:mk4445f5eac21c77379a3af06cd000490a3c92e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.795964  314692 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key ...
	I1102 13:36:17.795978  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key: {Name:mk2cd1fcae7ce412117f843d403ccb7295c5d3f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
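	(The SAN list logged for the apiserver cert above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]) can be read back from the generated file; a sketch, with the path taken from this run:
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
	)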
	I1102 13:36:17.796162  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:36:17.796197  314692 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:36:17.796207  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:36:17.796227  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:36:17.796248  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:36:17.796268  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:36:17.796308  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:17.796818  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:36:17.815829  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:36:17.833605  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:36:17.851062  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:36:17.869100  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 13:36:17.886289  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:36:17.903502  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:36:17.921328  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:36:17.938765  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:36:17.955853  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:36:17.973411  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:36:17.991438  314692 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:36:18.004750  314692 ssh_runner.go:195] Run: openssl version
	I1102 13:36:18.011970  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:36:18.021125  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:18.025417  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:18.025470  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:18.062923  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:36:18.071543  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:36:18.080340  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:36:18.084122  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:36:18.084184  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:36:18.120469  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:36:18.129115  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:36:18.137634  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:36:18.141539  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:36:18.141602  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:36:18.178120  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:36:18.186380  314692 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:36:18.190139  314692 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 13:36:18.190202  314692 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:36:18.190263  314692 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:36:18.190314  314692 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:36:18.219699  314692 cri.go:89] found id: ""
	I1102 13:36:18.219766  314692 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:36:18.229163  314692 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 13:36:18.237732  314692 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 13:36:18.237794  314692 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 13:36:18.246294  314692 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 13:36:18.246318  314692 kubeadm.go:158] found existing configuration files:
	
	I1102 13:36:18.246380  314692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1102 13:36:18.254067  314692 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 13:36:18.254132  314692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 13:36:18.262448  314692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1102 13:36:18.271074  314692 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 13:36:18.271134  314692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 13:36:18.279732  314692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1102 13:36:18.288400  314692 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 13:36:18.288479  314692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 13:36:18.295833  314692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1102 13:36:18.303350  314692 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 13:36:18.303425  314692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1102 13:36:18.311139  314692 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 13:36:18.350216  314692 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 13:36:18.350300  314692 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 13:36:18.372987  314692 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 13:36:18.373101  314692 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1102 13:36:18.373155  314692 kubeadm.go:319] OS: Linux
	I1102 13:36:18.373229  314692 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 13:36:18.373304  314692 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 13:36:18.373389  314692 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 13:36:18.373468  314692 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 13:36:18.373533  314692 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 13:36:18.373617  314692 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 13:36:18.373718  314692 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 13:36:18.373816  314692 kubeadm.go:319] CGROUPS_IO: enabled
	I1102 13:36:18.432435  314692 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 13:36:18.432557  314692 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 13:36:18.432690  314692 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 13:36:18.439861  314692 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 13:36:16.719393  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:17.218927  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:17.718523  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:18.218633  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:18.718482  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:19.219410  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:19.718732  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:19.800901  309290 kubeadm.go:1114] duration metric: took 4.683077814s to wait for elevateKubeSystemPrivileges
	I1102 13:36:19.800946  309290 kubeadm.go:403] duration metric: took 17.795902376s to StartCluster
	I1102 13:36:19.800968  309290 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:19.801034  309290 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:36:19.803196  309290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:19.803463  309290 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 13:36:19.803485  309290 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:36:19.803546  309290 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:36:19.803663  309290 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-748183"
	I1102 13:36:19.803675  309290 config.go:182] Loaded profile config "embed-certs-748183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:19.803681  309290 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-748183"
	I1102 13:36:19.803696  309290 addons.go:70] Setting default-storageclass=true in profile "embed-certs-748183"
	I1102 13:36:19.803725  309290 host.go:66] Checking if "embed-certs-748183" exists ...
	I1102 13:36:19.803731  309290 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-748183"
	I1102 13:36:19.804096  309290 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:36:19.804273  309290 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:36:19.805888  309290 out.go:179] * Verifying Kubernetes components...
	I1102 13:36:19.807211  309290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:19.828857  309290 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:36:19.829978  309290 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:36:19.829998  309290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:36:19.830053  309290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:36:19.831353  309290 addons.go:239] Setting addon default-storageclass=true in "embed-certs-748183"
	I1102 13:36:19.831395  309290 host.go:66] Checking if "embed-certs-748183" exists ...
	I1102 13:36:19.831872  309290 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:36:19.860998  309290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:36:19.861418  309290 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:36:19.861453  309290 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:36:19.861517  309290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:36:19.885111  309290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:36:19.900279  309290 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 13:36:19.943668  309290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:36:19.986109  309290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:36:19.998900  309290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:36:20.079731  309290 start.go:1013] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1102 13:36:20.081284  309290 node_ready.go:35] waiting up to 6m0s for node "embed-certs-748183" to be "Ready" ...
	I1102 13:36:20.304961  309290 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 13:36:18.442017  314692 out.go:252]   - Generating certificates and keys ...
	I1102 13:36:18.442121  314692 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 13:36:18.442224  314692 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 13:36:18.730940  314692 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 13:36:18.858951  314692 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 13:36:19.384727  314692 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 13:36:19.412774  314692 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 13:36:19.902782  314692 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 13:36:19.902998  314692 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-538419 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1102 13:36:20.222663  314692 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 13:36:20.222887  314692 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-538419 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	W1102 13:36:16.730904  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	W1102 13:36:18.740498  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	I1102 13:36:20.305958  309290 addons.go:515] duration metric: took 502.411848ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 13:36:20.585139  309290 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-748183" context rescaled to 1 replicas
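
The two steps above (rewriting the coredns ConfigMap to add a hosts block for host.minikube.internal, then rescaling the deployment to one replica) can be reproduced by hand. A minimal sketch, assuming kubectl is pointed at the embed-certs-748183 cluster:

  # Inspect the hosts block injected by the sed pipeline logged above
  kubectl -n kube-system get configmap coredns -o yaml
  # Rescale coredns to a single replica, as kapi.go reports doing
  kubectl -n kube-system scale deployment coredns --replicas=1
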
	I1102 13:36:20.484633  314692 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 13:36:20.576482  314692 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 13:36:20.683047  314692 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 13:36:20.683224  314692 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 13:36:21.190668  314692 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 13:36:21.263301  314692 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 13:36:22.036334  314692 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 13:36:22.143162  314692 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 13:36:22.245254  314692 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 13:36:22.245836  314692 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 13:36:22.249649  314692 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
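
The SAN lists kubeadm logs for the etcd serving certs ([default-k8s-diff-port-538419 localhost] plus the node IP, 127.0.0.1, and ::1) can be verified on the node with openssl. A sketch; the cert path is an assumption (minikube usually points kubeadm at /var/lib/minikube/certs, where stock kubeadm would use /etc/kubernetes/pki):

  # Print the Subject Alternative Names of the etcd serving cert
  sudo openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -text \
    | grep -A1 'Subject Alternative Name'
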
	
	
	==> CRI-O <==
	Nov 02 13:36:12 no-preload-978795 crio[794]: time="2025-11-02T13:36:12.980011403Z" level=info msg="Starting container: 4ec63f4d00f3b2dc26d01558f27e031b58dc8d3c3fa6f509d6dfa7787366bb54" id=18d95b79-5e5d-468c-96c7-fc90a8093caf name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:36:12 no-preload-978795 crio[794]: time="2025-11-02T13:36:12.982142824Z" level=info msg="Started container" PID=2942 containerID=4ec63f4d00f3b2dc26d01558f27e031b58dc8d3c3fa6f509d6dfa7787366bb54 description=kube-system/coredns-66bc5c9577-2dtpc/coredns id=18d95b79-5e5d-468c-96c7-fc90a8093caf name=/runtime.v1.RuntimeService/StartContainer sandboxID=05808130569a7b880ae4d2d64e5c121994ec0a666641419faf08fbe9cacda1c1
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.494800941Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4c742dc7-ff2a-4fb1-8013-e0707328bfae name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.494892786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.500043043Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a22e454b4431498d9a224a51baf025b67a4d708ede05d66407e8a8bd031c876f UID:a73e312f-e302-474f-9e60-484d384e49da NetNS:/var/run/netns/9c4583c8-d4cb-4f2a-85a1-3b21539a3f75 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000690d08}] Aliases:map[]}"
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.500081906Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.509980309Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a22e454b4431498d9a224a51baf025b67a4d708ede05d66407e8a8bd031c876f UID:a73e312f-e302-474f-9e60-484d384e49da NetNS:/var/run/netns/9c4583c8-d4cb-4f2a-85a1-3b21539a3f75 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000690d08}] Aliases:map[]}"
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.510218092Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.51116057Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.512007511Z" level=info msg="Ran pod sandbox a22e454b4431498d9a224a51baf025b67a4d708ede05d66407e8a8bd031c876f with infra container: default/busybox/POD" id=4c742dc7-ff2a-4fb1-8013-e0707328bfae name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.513189083Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=15c38d9b-1653-4892-8bca-2a40044e7bdc name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.513338368Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=15c38d9b-1653-4892-8bca-2a40044e7bdc name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.513383004Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=15c38d9b-1653-4892-8bca-2a40044e7bdc name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.513934247Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c221cbed-516a-45ee-a7f6-86f8a6891bb3 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:36:15 no-preload-978795 crio[794]: time="2025-11-02T13:36:15.515471296Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 02 13:36:16 no-preload-978795 crio[794]: time="2025-11-02T13:36:16.897609188Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c221cbed-516a-45ee-a7f6-86f8a6891bb3 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:36:16 no-preload-978795 crio[794]: time="2025-11-02T13:36:16.898195416Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ffaad326-07e7-44fc-a4c0-e3fdcf41a1fe name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:16 no-preload-978795 crio[794]: time="2025-11-02T13:36:16.899722213Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c3e93a8e-2ebe-4633-9d4e-5f8168c226a0 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:16 no-preload-978795 crio[794]: time="2025-11-02T13:36:16.902923752Z" level=info msg="Creating container: default/busybox/busybox" id=d3033f43-d9fd-44fc-bd08-65e15e927592 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:16 no-preload-978795 crio[794]: time="2025-11-02T13:36:16.903056285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:16 no-preload-978795 crio[794]: time="2025-11-02T13:36:16.907784026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:16 no-preload-978795 crio[794]: time="2025-11-02T13:36:16.908369431Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:16 no-preload-978795 crio[794]: time="2025-11-02T13:36:16.934022158Z" level=info msg="Created container a5520b4433d9765283920b5aa5b0dc951ac3fb9ae0c6434849d69c26605b1f12: default/busybox/busybox" id=d3033f43-d9fd-44fc-bd08-65e15e927592 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:16 no-preload-978795 crio[794]: time="2025-11-02T13:36:16.934704132Z" level=info msg="Starting container: a5520b4433d9765283920b5aa5b0dc951ac3fb9ae0c6434849d69c26605b1f12" id=fc552490-b1b4-4cf4-8562-fd5469a90bab name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:36:16 no-preload-978795 crio[794]: time="2025-11-02T13:36:16.936770035Z" level=info msg="Started container" PID=3016 containerID=a5520b4433d9765283920b5aa5b0dc951ac3fb9ae0c6434849d69c26605b1f12 description=default/busybox/busybox id=fc552490-b1b4-4cf4-8562-fd5469a90bab name=/runtime.v1.RuntimeService/StartContainer sandboxID=a22e454b4431498d9a224a51baf025b67a4d708ede05d66407e8a8bd031c876f
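
The ImageStatus -> PullImage -> CreateContainer -> StartContainer sequence above is the standard CRI flow; the pull half maps onto crictl one-liners. A sketch, run on the node against the default CRI-O socket:

  # Resolve and pull the tag, as the PullImage RPC above does
  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
  # Confirm the image is present, with the digest it was pulled by
  sudo crictl images --digests | grep busybox
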
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a5520b4433d97       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   a22e454b44314       busybox                                     default
	4ec63f4d00f3b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   05808130569a7       coredns-66bc5c9577-2dtpc                    kube-system
	d27e05edb7900       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   821662e465b0c       storage-provisioner                         kube-system
	ef992907a50fe       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   33ea7e2451198       kindnet-d8n4x                               kube-system
	8d314967dee43       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   0aeaf8686eef5       kube-proxy-rmkmd                            kube-system
	1c6ccf5eeb246       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   599e89a80ee14       kube-scheduler-no-preload-978795            kube-system
	93e251e3880b0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   c5adec25a13b4       kube-controller-manager-no-preload-978795   kube-system
	ff2c3dbb00102       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   e12fa655b1da2       kube-apiserver-no-preload-978795            kube-system
	e9743386f7b83       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   a9001eebbf059       etcd-no-preload-978795                      kube-system
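
This table is crictl's container listing; to reproduce or drill into it on the node (a sketch):

  # List all containers, including exited ones
  sudo crictl ps -a
  # Inspect one container by the ID prefix printed above
  sudo crictl inspect a5520b4433d97
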
	
	
	==> coredns [4ec63f4d00f3b2dc26d01558f27e031b58dc8d3c3fa6f509d6dfa7787366bb54] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49971 - 28400 "HINFO IN 1618073768310791439.2734622551849198439. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016768188s
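
The lone HINFO query with a random name is CoreDNS's loop-plugin self-probe, so the NXDOMAIN answer is expected. If the same host-record injection seen earlier for embed-certs also ran for this profile (an assumption), resolution can be spot-checked from the node against the kube-dns ClusterIP reported in the apiserver section below:

  # Expect the host gateway IP for this profile's Docker network
  dig +short host.minikube.internal @10.96.0.10
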
	
	
	==> describe nodes <==
	Name:               no-preload-978795
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-978795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=no-preload-978795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_35_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:35:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-978795
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:36:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:36:24 +0000   Sun, 02 Nov 2025 13:35:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:36:24 +0000   Sun, 02 Nov 2025 13:35:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:36:24 +0000   Sun, 02 Nov 2025 13:35:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:36:24 +0000   Sun, 02 Nov 2025 13:36:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-978795
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                886d43f2-0cc9-4abe-b8a0-71a0f502a9fe
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-2dtpc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-978795                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-d8n4x                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-978795             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-978795    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-rmkmd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-978795             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node no-preload-978795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node no-preload-978795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node no-preload-978795 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node no-preload-978795 event: Registered Node no-preload-978795 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-978795 status is now: NodeReady
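
The 850m CPU request total is just the sum of the per-pod requests above (100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), and likewise 220Mi = 70Mi + 100Mi + 50Mi. The dump itself is the equivalent of:

  kubectl describe node no-preload-978795
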
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
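
The martian-source lines are the kernel flagging packets whose source address should not appear on eth0, most likely pod-CIDR (10.244.0.x) traffic leaking onto the node interfaces in this nested Docker setup; they are noise as far as the tests are concerned. Whether such packets get logged at all is gated by a sysctl:

  # 1 = log martian packets (the setting in effect here)
  sysctl net.ipv4.conf.all.log_martians
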
	
	
	==> etcd [e9743386f7b839461f72fec3f801bb36eaa32141e2d275bf0c97a954036af5d5] <==
	{"level":"info","ts":"2025-11-02T13:35:51.851101Z","caller":"traceutil/trace.go:172","msg":"trace[742069676] transaction","detail":"{read_only:false; response_revision:12; number_of_response:1; }","duration":"160.612336ms","start":"2025-11-02T13:35:51.690476Z","end":"2025-11-02T13:35:51.851088Z","steps":["trace[742069676] 'process raft request'  (duration: 160.583815ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:51.851126Z","caller":"traceutil/trace.go:172","msg":"trace[193906141] transaction","detail":"{read_only:false; response_revision:8; number_of_response:1; }","duration":"175.490979ms","start":"2025-11-02T13:35:51.675615Z","end":"2025-11-02T13:35:51.851106Z","steps":["trace[193906141] 'process raft request'  (duration: 175.321438ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:51.851229Z","caller":"traceutil/trace.go:172","msg":"trace[1337276268] transaction","detail":"{read_only:false; response_revision:11; number_of_response:1; }","duration":"161.709118ms","start":"2025-11-02T13:35:51.689513Z","end":"2025-11-02T13:35:51.851222Z","steps":["trace[1337276268] 'process raft request'  (duration: 161.512509ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:51.851264Z","caller":"traceutil/trace.go:172","msg":"trace[917179653] transaction","detail":"{read_only:false; number_of_response:0; response_revision:10; }","duration":"174.281557ms","start":"2025-11-02T13:35:51.676974Z","end":"2025-11-02T13:35:51.851256Z","steps":["trace[917179653] 'process raft request'  (duration: 174.032321ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-02T13:35:51.907269Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.709371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-02T13:35:51.907377Z","caller":"traceutil/trace.go:172","msg":"trace[82284163] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:12; }","duration":"115.840711ms","start":"2025-11-02T13:35:51.791518Z","end":"2025-11-02T13:35:51.907358Z","steps":["trace[82284163] 'agreement among raft nodes before linearized reading'  (duration: 111.473914ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:51.907367Z","caller":"traceutil/trace.go:172","msg":"trace[336623468] transaction","detail":"{read_only:false; response_revision:13; number_of_response:1; }","duration":"114.216362ms","start":"2025-11-02T13:35:51.793128Z","end":"2025-11-02T13:35:51.907344Z","steps":["trace[336623468] 'process raft request'  (duration: 109.907565ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-02T13:35:52.195179Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.082154ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765901467804322 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/prioritylevelconfigurations/leader-election\" mod_revision:0 > success:<request_put:<key:\"/registry/prioritylevelconfigurations/leader-election\" value_size:645 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-02T13:35:52.195388Z","caller":"traceutil/trace.go:172","msg":"trace[1464261703] transaction","detail":"{read_only:false; response_revision:25; number_of_response:1; }","duration":"278.565437ms","start":"2025-11-02T13:35:51.916779Z","end":"2025-11-02T13:35:52.195345Z","steps":["trace[1464261703] 'process raft request'  (duration: 154.831505ms)","trace[1464261703] 'compare'  (duration: 122.930913ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-02T13:35:52.195575Z","caller":"traceutil/trace.go:172","msg":"trace[1771666949] transaction","detail":"{read_only:false; response_revision:26; number_of_response:1; }","duration":"278.630516ms","start":"2025-11-02T13:35:51.916903Z","end":"2025-11-02T13:35:52.195534Z","steps":["trace[1771666949] 'process raft request'  (duration: 278.370726ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.195674Z","caller":"traceutil/trace.go:172","msg":"trace[884787285] transaction","detail":"{read_only:false; response_revision:31; number_of_response:1; }","duration":"278.3193ms","start":"2025-11-02T13:35:51.917349Z","end":"2025-11-02T13:35:52.195669Z","steps":["trace[884787285] 'process raft request'  (duration: 278.191855ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.195693Z","caller":"traceutil/trace.go:172","msg":"trace[952620637] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"278.384325ms","start":"2025-11-02T13:35:51.917284Z","end":"2025-11-02T13:35:52.195669Z","steps":["trace[952620637] 'process raft request'  (duration: 278.233265ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.195628Z","caller":"traceutil/trace.go:172","msg":"trace[255502718] transaction","detail":"{read_only:false; response_revision:29; number_of_response:1; }","duration":"278.374133ms","start":"2025-11-02T13:35:51.917245Z","end":"2025-11-02T13:35:52.195619Z","steps":["trace[255502718] 'process raft request'  (duration: 278.245839ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.195780Z","caller":"traceutil/trace.go:172","msg":"trace[407392594] transaction","detail":"{read_only:false; response_revision:32; number_of_response:1; }","duration":"278.408781ms","start":"2025-11-02T13:35:51.917363Z","end":"2025-11-02T13:35:52.195772Z","steps":["trace[407392594] 'process raft request'  (duration: 278.228603ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.195807Z","caller":"traceutil/trace.go:172","msg":"trace[572675515] transaction","detail":"{read_only:false; response_revision:34; number_of_response:1; }","duration":"272.949623ms","start":"2025-11-02T13:35:51.922848Z","end":"2025-11-02T13:35:52.195797Z","steps":["trace[572675515] 'process raft request'  (duration: 272.920908ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.195628Z","caller":"traceutil/trace.go:172","msg":"trace[344543799] transaction","detail":"{read_only:false; response_revision:27; number_of_response:1; }","duration":"278.684047ms","start":"2025-11-02T13:35:51.916929Z","end":"2025-11-02T13:35:52.195613Z","steps":["trace[344543799] 'process raft request'  (duration: 278.462922ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.195652Z","caller":"traceutil/trace.go:172","msg":"trace[487705598] transaction","detail":"{read_only:false; response_revision:28; number_of_response:1; }","duration":"278.54971ms","start":"2025-11-02T13:35:51.917097Z","end":"2025-11-02T13:35:52.195647Z","steps":["trace[487705598] 'process raft request'  (duration: 278.336254ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.195975Z","caller":"traceutil/trace.go:172","msg":"trace[285348559] transaction","detail":"{read_only:false; response_revision:33; number_of_response:1; }","duration":"277.137073ms","start":"2025-11-02T13:35:51.918828Z","end":"2025-11-02T13:35:52.195965Z","steps":["trace[285348559] 'process raft request'  (duration: 276.799814ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.308416Z","caller":"traceutil/trace.go:172","msg":"trace[1789551573] transaction","detail":"{read_only:false; response_revision:37; number_of_response:1; }","duration":"108.506389ms","start":"2025-11-02T13:35:52.199892Z","end":"2025-11-02T13:35:52.308399Z","steps":["trace[1789551573] 'process raft request'  (duration: 108.370554ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.308439Z","caller":"traceutil/trace.go:172","msg":"trace[407398359] transaction","detail":"{read_only:false; response_revision:40; number_of_response:1; }","duration":"108.291276ms","start":"2025-11-02T13:35:52.200137Z","end":"2025-11-02T13:35:52.308428Z","steps":["trace[407398359] 'process raft request'  (duration: 108.251526ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.308477Z","caller":"traceutil/trace.go:172","msg":"trace[2047715742] transaction","detail":"{read_only:false; response_revision:39; number_of_response:1; }","duration":"108.406019ms","start":"2025-11-02T13:35:52.200065Z","end":"2025-11-02T13:35:52.308471Z","steps":["trace[2047715742] 'process raft request'  (duration: 108.2709ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.308464Z","caller":"traceutil/trace.go:172","msg":"trace[462256073] transaction","detail":"{read_only:false; response_revision:38; number_of_response:1; }","duration":"108.450821ms","start":"2025-11-02T13:35:52.200002Z","end":"2025-11-02T13:35:52.308452Z","steps":["trace[462256073] 'process raft request'  (duration: 108.307451ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T13:35:52.308362Z","caller":"traceutil/trace.go:172","msg":"trace[1955944210] transaction","detail":"{read_only:false; response_revision:36; number_of_response:1; }","duration":"108.454232ms","start":"2025-11-02T13:35:52.199873Z","end":"2025-11-02T13:35:52.308327Z","steps":["trace[1955944210] 'process raft request'  (duration: 106.521282ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-02T13:35:52.593331Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.595938ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765901467804340 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/prioritylevelconfigurations/global-default\" mod_revision:0 > success:<request_put:<key:\"/registry/prioritylevelconfigurations/global-default\" value_size:645 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-02T13:35:52.593440Z","caller":"traceutil/trace.go:172","msg":"trace[1503773891] transaction","detail":"{read_only:false; response_revision:42; number_of_response:1; }","duration":"232.79531ms","start":"2025-11-02T13:35:52.360619Z","end":"2025-11-02T13:35:52.593414Z","steps":["trace[1503773891] 'process raft request'  (duration: 124.060306ms)","trace[1503773891] 'compare'  (duration: 108.476084ms)"],"step_count":2}
	
	
	==> kernel <==
	 13:36:24 up  1:18,  0 user,  load average: 5.07, 4.21, 2.64
	Linux no-preload-978795 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ef992907a50fe88e4c43aaf109bdd8c7ca0cc8773aebfc7e1abf8c92a97da7e9] <==
	I1102 13:36:02.091285       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:36:02.091609       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1102 13:36:02.091766       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:36:02.091788       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:36:02.091813       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:36:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:36:02.300742       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:36:02.300785       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:36:02.300802       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:36:02.300969       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:36:02.701410       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:36:02.701434       1 metrics.go:72] Registering metrics
	I1102 13:36:02.701488       1 controller.go:711] "Syncing nftables rules"
	I1102 13:36:12.303752       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:36:12.303824       1 main.go:301] handling current node
	I1102 13:36:22.304705       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:36:22.304798       1 main.go:301] handling current node
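
kindnet is handling only its own node here (a single-node cluster) and syncing nftables rules for network policies; the nri plugin message is harmless when no NRI socket exists. The same log can be pulled without ssh via the daemonset's pods; a sketch, where the app=kindnet label is an assumption about the manifest:

  kubectl -n kube-system logs -l app=kindnet --tail=20
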
	
	
	==> kube-apiserver [ff2c3dbb0010285ee34fbf4dc12fcc6bf59e8b2fdf9e34b3b66959320f0a28e2] <==
	I1102 13:35:51.675179       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1102 13:35:51.851988       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:35:51.852528       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:35:51.853150       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 13:35:51.911112       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:35:51.912034       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:35:52.598319       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 13:35:52.603854       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 13:35:52.603875       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:35:53.144723       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:35:53.194972       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:35:53.285706       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 13:35:53.294997       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1102 13:35:53.296405       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 13:35:53.303691       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:35:53.618413       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:35:54.179226       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:35:54.189883       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 13:35:54.197296       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 13:35:59.377293       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:35:59.382213       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:35:59.474855       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1102 13:35:59.723026       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1102 13:36:23.309477       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:59786: use of closed network connection
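
The single socket-receive error at 13:36:23 is a client closing its connection mid-read (plausibly the log collector itself) rather than an apiserver fault; overall health can be confirmed via the readyz endpoint:

  kubectl get --raw '/readyz?verbose'
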
	
	
	==> kube-controller-manager [93e251e3880b02cc0cc8e075ab4a8ac775b36142602a1e99a8f55e97a11d76ac] <==
	I1102 13:35:58.617806       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 13:35:58.617831       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 13:35:58.617848       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 13:35:58.617913       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:35:58.617930       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:35:58.617934       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 13:35:58.617939       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:35:58.617938       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 13:35:58.618008       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 13:35:58.618413       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:35:58.618510       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 13:35:58.618531       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1102 13:35:58.618727       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 13:35:58.618772       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1102 13:35:58.618897       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 13:35:58.619292       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 13:35:58.619309       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 13:35:58.619388       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-978795"
	I1102 13:35:58.619862       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1102 13:35:58.619410       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 13:35:58.623206       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 13:35:58.623971       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:35:58.629885       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 13:35:58.653549       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:36:13.622750       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
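
The node-lifecycle controller entered "master disruption mode" at 13:35:58 while no node was Ready and left it at 13:36:13, matching the NodeReady event at 13:36:12 in the node description above. The condition it keys on can be read directly:

  kubectl get node no-preload-978795 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
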
	
	
	==> kube-proxy [8d314967dee43b08792d95696168c019d457f906fd74b182ab42e28d7221d93c] <==
	I1102 13:36:00.067541       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:36:00.140402       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:36:00.241111       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:36:00.241247       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1102 13:36:00.241437       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:36:00.313964       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:36:00.314084       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:36:00.361077       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:36:00.361722       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:36:00.361753       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:36:00.365103       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:36:00.365169       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:36:00.365720       1 config.go:200] "Starting service config controller"
	I1102 13:36:00.365773       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:36:00.366192       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:36:00.366982       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:36:00.372226       1 config.go:309] "Starting node config controller"
	I1102 13:36:00.372250       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:36:00.372257       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:36:00.468658       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:36:00.469353       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:36:00.471604       1 shared_informer.go:356] "Caches are synced" controller="service config"
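
"Using iptables Proxier" means Service VIPs are realized as KUBE-* chains in the node's nat table; a quick way to see them (a sketch, run on the node):

  # Show the kube-proxy service dispatch rules
  sudo iptables -t nat -S | grep KUBE-SERVICES | head
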
	
	
	==> kube-scheduler [1c6ccf5eeb246f7291db1a55b6335cbd0f67d3440815060e36d8246a7dfdee74] <==
	E1102 13:35:51.650345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 13:35:51.650360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 13:35:51.650483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 13:35:51.650471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 13:35:51.650501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 13:35:51.650585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 13:35:52.477390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 13:35:52.485608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 13:35:52.486394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 13:35:52.542140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 13:35:52.557455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 13:35:52.573750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 13:35:52.596075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 13:35:52.618774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 13:35:52.680752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1102 13:35:52.683858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 13:35:52.717384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 13:35:52.765123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 13:35:52.781720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 13:35:52.836966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 13:35:52.855361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 13:35:52.879740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 13:35:52.900698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 13:35:52.912957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1102 13:35:55.846226       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
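
The burst of "Failed to watch ... is forbidden" errors is the scheduler's informers starting before the system:kube-scheduler RBAC bindings have been created; they stop once the caches sync at 13:35:55. Whether the binding has landed can be probed with impersonation:

  # Prints "yes" once the scheduler's ClusterRoleBinding exists
  kubectl auth can-i list pods --as=system:kube-scheduler
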
	
	
	==> kubelet <==
	Nov 02 13:35:55 no-preload-978795 kubelet[2335]: I1102 13:35:55.045353    2335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-978795" podStartSLOduration=1.0453337 podStartE2EDuration="1.0453337s" podCreationTimestamp="2025-11-02 13:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:35:55.045204053 +0000 UTC m=+1.119023245" watchObservedRunningTime="2025-11-02 13:35:55.0453337 +0000 UTC m=+1.119152886"
	Nov 02 13:35:55 no-preload-978795 kubelet[2335]: I1102 13:35:55.066828    2335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-978795" podStartSLOduration=1.066807139 podStartE2EDuration="1.066807139s" podCreationTimestamp="2025-11-02 13:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:35:55.056552767 +0000 UTC m=+1.130371964" watchObservedRunningTime="2025-11-02 13:35:55.066807139 +0000 UTC m=+1.140626331"
	Nov 02 13:35:55 no-preload-978795 kubelet[2335]: I1102 13:35:55.066975    2335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-978795" podStartSLOduration=1.066965591 podStartE2EDuration="1.066965591s" podCreationTimestamp="2025-11-02 13:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:35:55.066784524 +0000 UTC m=+1.140603729" watchObservedRunningTime="2025-11-02 13:35:55.066965591 +0000 UTC m=+1.140784781"
	Nov 02 13:35:55 no-preload-978795 kubelet[2335]: I1102 13:35:55.087374    2335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-978795" podStartSLOduration=1.087354115 podStartE2EDuration="1.087354115s" podCreationTimestamp="2025-11-02 13:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:35:55.07670372 +0000 UTC m=+1.150522911" watchObservedRunningTime="2025-11-02 13:35:55.087354115 +0000 UTC m=+1.161173302"
	Nov 02 13:35:58 no-preload-978795 kubelet[2335]: I1102 13:35:58.639276    2335 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 02 13:35:58 no-preload-978795 kubelet[2335]: I1102 13:35:58.640103    2335 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 02 13:35:59 no-preload-978795 kubelet[2335]: I1102 13:35:59.528179    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c337ae93-812a-455b-bfe4-cdf49864936f-xtables-lock\") pod \"kindnet-d8n4x\" (UID: \"c337ae93-812a-455b-bfe4-cdf49864936f\") " pod="kube-system/kindnet-d8n4x"
	Nov 02 13:35:59 no-preload-978795 kubelet[2335]: I1102 13:35:59.528223    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98f26f5f-cb23-4052-a93d-328210c54a54-lib-modules\") pod \"kube-proxy-rmkmd\" (UID: \"98f26f5f-cb23-4052-a93d-328210c54a54\") " pod="kube-system/kube-proxy-rmkmd"
	Nov 02 13:35:59 no-preload-978795 kubelet[2335]: I1102 13:35:59.528253    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c337ae93-812a-455b-bfe4-cdf49864936f-lib-modules\") pod \"kindnet-d8n4x\" (UID: \"c337ae93-812a-455b-bfe4-cdf49864936f\") " pod="kube-system/kindnet-d8n4x"
	Nov 02 13:35:59 no-preload-978795 kubelet[2335]: I1102 13:35:59.528289    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c337ae93-812a-455b-bfe4-cdf49864936f-cni-cfg\") pod \"kindnet-d8n4x\" (UID: \"c337ae93-812a-455b-bfe4-cdf49864936f\") " pod="kube-system/kindnet-d8n4x"
	Nov 02 13:35:59 no-preload-978795 kubelet[2335]: I1102 13:35:59.528625    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/98f26f5f-cb23-4052-a93d-328210c54a54-kube-proxy\") pod \"kube-proxy-rmkmd\" (UID: \"98f26f5f-cb23-4052-a93d-328210c54a54\") " pod="kube-system/kube-proxy-rmkmd"
	Nov 02 13:35:59 no-preload-978795 kubelet[2335]: I1102 13:35:59.528669    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98f26f5f-cb23-4052-a93d-328210c54a54-xtables-lock\") pod \"kube-proxy-rmkmd\" (UID: \"98f26f5f-cb23-4052-a93d-328210c54a54\") " pod="kube-system/kube-proxy-rmkmd"
	Nov 02 13:35:59 no-preload-978795 kubelet[2335]: I1102 13:35:59.528694    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dk5k\" (UniqueName: \"kubernetes.io/projected/98f26f5f-cb23-4052-a93d-328210c54a54-kube-api-access-2dk5k\") pod \"kube-proxy-rmkmd\" (UID: \"98f26f5f-cb23-4052-a93d-328210c54a54\") " pod="kube-system/kube-proxy-rmkmd"
	Nov 02 13:35:59 no-preload-978795 kubelet[2335]: I1102 13:35:59.528728    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcvdt\" (UniqueName: \"kubernetes.io/projected/c337ae93-812a-455b-bfe4-cdf49864936f-kube-api-access-vcvdt\") pod \"kindnet-d8n4x\" (UID: \"c337ae93-812a-455b-bfe4-cdf49864936f\") " pod="kube-system/kindnet-d8n4x"
	Nov 02 13:36:00 no-preload-978795 kubelet[2335]: I1102 13:36:00.061670    2335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rmkmd" podStartSLOduration=1.061637927 podStartE2EDuration="1.061637927s" podCreationTimestamp="2025-11-02 13:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:00.06147092 +0000 UTC m=+6.135290113" watchObservedRunningTime="2025-11-02 13:36:00.061637927 +0000 UTC m=+6.135457119"
	Nov 02 13:36:02 no-preload-978795 kubelet[2335]: I1102 13:36:02.063591    2335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-d8n4x" podStartSLOduration=1.113464021 podStartE2EDuration="3.063557092s" podCreationTimestamp="2025-11-02 13:35:59 +0000 UTC" firstStartedPulling="2025-11-02 13:35:59.84767976 +0000 UTC m=+5.921498943" lastFinishedPulling="2025-11-02 13:36:01.797772839 +0000 UTC m=+7.871592014" observedRunningTime="2025-11-02 13:36:02.063263641 +0000 UTC m=+8.137082837" watchObservedRunningTime="2025-11-02 13:36:02.063557092 +0000 UTC m=+8.137376283"
	Nov 02 13:36:12 no-preload-978795 kubelet[2335]: I1102 13:36:12.592202    2335 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 02 13:36:12 no-preload-978795 kubelet[2335]: I1102 13:36:12.730064    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1-tmp\") pod \"storage-provisioner\" (UID: \"0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1\") " pod="kube-system/storage-provisioner"
	Nov 02 13:36:12 no-preload-978795 kubelet[2335]: I1102 13:36:12.730140    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrljs\" (UniqueName: \"kubernetes.io/projected/0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1-kube-api-access-vrljs\") pod \"storage-provisioner\" (UID: \"0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1\") " pod="kube-system/storage-provisioner"
	Nov 02 13:36:12 no-preload-978795 kubelet[2335]: I1102 13:36:12.730263    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8533e5ca-78ef-4401-b967-018eceeb5321-config-volume\") pod \"coredns-66bc5c9577-2dtpc\" (UID: \"8533e5ca-78ef-4401-b967-018eceeb5321\") " pod="kube-system/coredns-66bc5c9577-2dtpc"
	Nov 02 13:36:12 no-preload-978795 kubelet[2335]: I1102 13:36:12.730323    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbh6c\" (UniqueName: \"kubernetes.io/projected/8533e5ca-78ef-4401-b967-018eceeb5321-kube-api-access-sbh6c\") pod \"coredns-66bc5c9577-2dtpc\" (UID: \"8533e5ca-78ef-4401-b967-018eceeb5321\") " pod="kube-system/coredns-66bc5c9577-2dtpc"
	Nov 02 13:36:13 no-preload-978795 kubelet[2335]: I1102 13:36:13.094450    2335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2dtpc" podStartSLOduration=14.094430354 podStartE2EDuration="14.094430354s" podCreationTimestamp="2025-11-02 13:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:13.094068782 +0000 UTC m=+19.167887984" watchObservedRunningTime="2025-11-02 13:36:13.094430354 +0000 UTC m=+19.168249545"
	Nov 02 13:36:13 no-preload-978795 kubelet[2335]: I1102 13:36:13.105120    2335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.105099739 podStartE2EDuration="13.105099739s" podCreationTimestamp="2025-11-02 13:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:13.104833719 +0000 UTC m=+19.178652910" watchObservedRunningTime="2025-11-02 13:36:13.105099739 +0000 UTC m=+19.178918930"
	Nov 02 13:36:15 no-preload-978795 kubelet[2335]: I1102 13:36:15.245267    2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2rvj\" (UniqueName: \"kubernetes.io/projected/a73e312f-e302-474f-9e60-484d384e49da-kube-api-access-s2rvj\") pod \"busybox\" (UID: \"a73e312f-e302-474f-9e60-484d384e49da\") " pod="default/busybox"
	Nov 02 13:36:17 no-preload-978795 kubelet[2335]: I1102 13:36:17.107402    2335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.721834088 podStartE2EDuration="2.107379036s" podCreationTimestamp="2025-11-02 13:36:15 +0000 UTC" firstStartedPulling="2025-11-02 13:36:15.513607029 +0000 UTC m=+21.587426201" lastFinishedPulling="2025-11-02 13:36:16.89915198 +0000 UTC m=+22.972971149" observedRunningTime="2025-11-02 13:36:17.107369875 +0000 UTC m=+23.181189059" watchObservedRunningTime="2025-11-02 13:36:17.107379036 +0000 UTC m=+23.181198227"
	
	
	==> storage-provisioner [d27e05edb7900aca37758666d6d975d757ecf0c21b9053a50b7ce0ba20a3810c] <==
	I1102 13:36:12.987873       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:36:12.996544       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:36:12.996613       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 13:36:12.999262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:13.005019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:36:13.005141       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:36:13.005276       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f711ec74-9f5e-4d88-b29d-598bc126b1de", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-978795_28b1eb6f-3af6-450e-8f93-5eab2f24930e became leader
	I1102 13:36:13.005337       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-978795_28b1eb6f-3af6-450e-8f93-5eab2f24930e!
	W1102 13:36:13.008687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:13.016120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:36:13.106409       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-978795_28b1eb6f-3af6-450e-8f93-5eab2f24930e!
	W1102 13:36:15.019841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:15.025240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:17.029021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:17.034683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:19.038553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:19.043331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:21.046435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:21.050696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:23.054273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:23.059463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:25.062923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:25.067605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-978795 -n no-preload-978795
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-978795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-054159 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-054159 --alsologtostderr -v=1: exit status 80 (2.495133781s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-054159 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1102 13:36:37.443216  320073 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:36:37.443485  320073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:37.443494  320073 out.go:374] Setting ErrFile to fd 2...
	I1102 13:36:37.443499  320073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:37.443807  320073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:36:37.444113  320073 out.go:368] Setting JSON to false
	I1102 13:36:37.444150  320073 mustload.go:66] Loading cluster: old-k8s-version-054159
	I1102 13:36:37.444477  320073 config.go:182] Loaded profile config "old-k8s-version-054159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1102 13:36:37.444852  320073 cli_runner.go:164] Run: docker container inspect old-k8s-version-054159 --format={{.State.Status}}
	I1102 13:36:37.462719  320073 host.go:66] Checking if "old-k8s-version-054159" exists ...
	I1102 13:36:37.463011  320073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:36:37.520092  320073 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-02 13:36:37.509335843 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:36:37.520732  320073 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-054159 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 13:36:37.522544  320073 out.go:179] * Pausing node old-k8s-version-054159 ... 
	I1102 13:36:37.523609  320073 host.go:66] Checking if "old-k8s-version-054159" exists ...
	I1102 13:36:37.523864  320073 ssh_runner.go:195] Run: systemctl --version
	I1102 13:36:37.523933  320073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054159
	I1102 13:36:37.543517  320073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/old-k8s-version-054159/id_rsa Username:docker}
	I1102 13:36:37.644390  320073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:36:37.669159  320073 pause.go:52] kubelet running: true
	I1102 13:36:37.669264  320073 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:36:37.835302  320073 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:36:37.835410  320073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:36:37.901260  320073 cri.go:89] found id: "d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92"
	I1102 13:36:37.901280  320073 cri.go:89] found id: "0f5b7a2354445d7b753713e24f3d555c858e56162a36128edb30aa4306db7bdf"
	I1102 13:36:37.901285  320073 cri.go:89] found id: "8a9186d756f777915b28f8dc5a47f88ab77f5894174f2be3aeb62d4d805d195e"
	I1102 13:36:37.901288  320073 cri.go:89] found id: "d9a922735c457be31b0651d808a21f81fa2076a48b587879e7f67d257541ca7e"
	I1102 13:36:37.901292  320073 cri.go:89] found id: "7e383c881d9d3abcb4f7e729e96b5ade0e93440c00e4cb2874f12cec251c4038"
	I1102 13:36:37.901296  320073 cri.go:89] found id: "c7536c075630aaedcf682df764af6b60cd0fbd104f3182a0af9fb437ad59e8d1"
	I1102 13:36:37.901299  320073 cri.go:89] found id: "497de31f5a58afa435675d555e4a9181b9b73ba965821b91058ff9ca667f02b0"
	I1102 13:36:37.901301  320073 cri.go:89] found id: "f4e2888d6cf266f47dd2d8001b51e3862a9b00ef8f405bc0b2701e18774fefa9"
	I1102 13:36:37.901304  320073 cri.go:89] found id: "2b6bce8320e430cccc1ee82606e722a96967aeb952b023a240569ca340578386"
	I1102 13:36:37.901314  320073 cri.go:89] found id: "92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3"
	I1102 13:36:37.901324  320073 cri.go:89] found id: "63d903272d477fb68ba2d81cb506487138b802b6f4046fe565cd6fdbf5dbdfd8"
	I1102 13:36:37.901328  320073 cri.go:89] found id: ""
	I1102 13:36:37.901375  320073 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:36:37.912929  320073 retry.go:31] will retry after 345.40882ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:36:37Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:36:38.259560  320073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:36:38.273025  320073 pause.go:52] kubelet running: false
	I1102 13:36:38.273078  320073 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:36:38.412516  320073 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:36:38.412631  320073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:36:38.479083  320073 cri.go:89] found id: "d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92"
	I1102 13:36:38.479105  320073 cri.go:89] found id: "0f5b7a2354445d7b753713e24f3d555c858e56162a36128edb30aa4306db7bdf"
	I1102 13:36:38.479109  320073 cri.go:89] found id: "8a9186d756f777915b28f8dc5a47f88ab77f5894174f2be3aeb62d4d805d195e"
	I1102 13:36:38.479117  320073 cri.go:89] found id: "d9a922735c457be31b0651d808a21f81fa2076a48b587879e7f67d257541ca7e"
	I1102 13:36:38.479120  320073 cri.go:89] found id: "7e383c881d9d3abcb4f7e729e96b5ade0e93440c00e4cb2874f12cec251c4038"
	I1102 13:36:38.479123  320073 cri.go:89] found id: "c7536c075630aaedcf682df764af6b60cd0fbd104f3182a0af9fb437ad59e8d1"
	I1102 13:36:38.479125  320073 cri.go:89] found id: "497de31f5a58afa435675d555e4a9181b9b73ba965821b91058ff9ca667f02b0"
	I1102 13:36:38.479128  320073 cri.go:89] found id: "f4e2888d6cf266f47dd2d8001b51e3862a9b00ef8f405bc0b2701e18774fefa9"
	I1102 13:36:38.479130  320073 cri.go:89] found id: "2b6bce8320e430cccc1ee82606e722a96967aeb952b023a240569ca340578386"
	I1102 13:36:38.479140  320073 cri.go:89] found id: "92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3"
	I1102 13:36:38.479143  320073 cri.go:89] found id: "63d903272d477fb68ba2d81cb506487138b802b6f4046fe565cd6fdbf5dbdfd8"
	I1102 13:36:38.479145  320073 cri.go:89] found id: ""
	I1102 13:36:38.479180  320073 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:36:38.491087  320073 retry.go:31] will retry after 234.301676ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:36:38Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:36:38.725523  320073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:36:38.738835  320073 pause.go:52] kubelet running: false
	I1102 13:36:38.738897  320073 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:36:38.881626  320073 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:36:38.881713  320073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:36:38.947880  320073 cri.go:89] found id: "d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92"
	I1102 13:36:38.947907  320073 cri.go:89] found id: "0f5b7a2354445d7b753713e24f3d555c858e56162a36128edb30aa4306db7bdf"
	I1102 13:36:38.947912  320073 cri.go:89] found id: "8a9186d756f777915b28f8dc5a47f88ab77f5894174f2be3aeb62d4d805d195e"
	I1102 13:36:38.947917  320073 cri.go:89] found id: "d9a922735c457be31b0651d808a21f81fa2076a48b587879e7f67d257541ca7e"
	I1102 13:36:38.947919  320073 cri.go:89] found id: "7e383c881d9d3abcb4f7e729e96b5ade0e93440c00e4cb2874f12cec251c4038"
	I1102 13:36:38.947923  320073 cri.go:89] found id: "c7536c075630aaedcf682df764af6b60cd0fbd104f3182a0af9fb437ad59e8d1"
	I1102 13:36:38.947925  320073 cri.go:89] found id: "497de31f5a58afa435675d555e4a9181b9b73ba965821b91058ff9ca667f02b0"
	I1102 13:36:38.947927  320073 cri.go:89] found id: "f4e2888d6cf266f47dd2d8001b51e3862a9b00ef8f405bc0b2701e18774fefa9"
	I1102 13:36:38.947930  320073 cri.go:89] found id: "2b6bce8320e430cccc1ee82606e722a96967aeb952b023a240569ca340578386"
	I1102 13:36:38.947935  320073 cri.go:89] found id: "92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3"
	I1102 13:36:38.947938  320073 cri.go:89] found id: "63d903272d477fb68ba2d81cb506487138b802b6f4046fe565cd6fdbf5dbdfd8"
	I1102 13:36:38.947940  320073 cri.go:89] found id: ""
	I1102 13:36:38.947976  320073 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:36:38.959327  320073 retry.go:31] will retry after 677.464455ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:36:38Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:36:39.637170  320073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:36:39.650431  320073 pause.go:52] kubelet running: false
	I1102 13:36:39.650502  320073 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:36:39.794460  320073 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:36:39.794556  320073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:36:39.860158  320073 cri.go:89] found id: "d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92"
	I1102 13:36:39.860179  320073 cri.go:89] found id: "0f5b7a2354445d7b753713e24f3d555c858e56162a36128edb30aa4306db7bdf"
	I1102 13:36:39.860185  320073 cri.go:89] found id: "8a9186d756f777915b28f8dc5a47f88ab77f5894174f2be3aeb62d4d805d195e"
	I1102 13:36:39.860189  320073 cri.go:89] found id: "d9a922735c457be31b0651d808a21f81fa2076a48b587879e7f67d257541ca7e"
	I1102 13:36:39.860193  320073 cri.go:89] found id: "7e383c881d9d3abcb4f7e729e96b5ade0e93440c00e4cb2874f12cec251c4038"
	I1102 13:36:39.860197  320073 cri.go:89] found id: "c7536c075630aaedcf682df764af6b60cd0fbd104f3182a0af9fb437ad59e8d1"
	I1102 13:36:39.860201  320073 cri.go:89] found id: "497de31f5a58afa435675d555e4a9181b9b73ba965821b91058ff9ca667f02b0"
	I1102 13:36:39.860205  320073 cri.go:89] found id: "f4e2888d6cf266f47dd2d8001b51e3862a9b00ef8f405bc0b2701e18774fefa9"
	I1102 13:36:39.860208  320073 cri.go:89] found id: "2b6bce8320e430cccc1ee82606e722a96967aeb952b023a240569ca340578386"
	I1102 13:36:39.860215  320073 cri.go:89] found id: "92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3"
	I1102 13:36:39.860219  320073 cri.go:89] found id: "63d903272d477fb68ba2d81cb506487138b802b6f4046fe565cd6fdbf5dbdfd8"
	I1102 13:36:39.860222  320073 cri.go:89] found id: ""
	I1102 13:36:39.860270  320073 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:36:39.873985  320073 out.go:203] 
	W1102 13:36:39.875212  320073 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:36:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:36:39.875230  320073 out.go:285] * 
	W1102 13:36:39.879218  320073 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:36:39.880401  320073 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-054159 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-054159
helpers_test.go:243: (dbg) docker inspect old-k8s-version-054159:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066",
	        "Created": "2025-11-02T13:34:24.271262498Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:35:36.089492673Z",
	            "FinishedAt": "2025-11-02T13:35:34.238643958Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/hostname",
	        "HostsPath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/hosts",
	        "LogPath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066-json.log",
	        "Name": "/old-k8s-version-054159",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-054159:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-054159",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066",
	                "LowerDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-054159",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-054159/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-054159",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-054159",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-054159",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8572b7cb5f8f6a79da6e7886298653e60822c58ad51bf24bead535876c2dd7ab",
	            "SandboxKey": "/var/run/docker/netns/8572b7cb5f8f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-054159": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:1f:60:c0:6f:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4ae33975e63c84f4b70da6cb2d4c25dac69220c357b8926c3be9f60de4d8948a",
	                    "EndpointID": "b6db29e7266a2a7697cf14fff2206c7a9a27b9bc7db4e8cb37f1287734eaceda",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-054159",
	                        "a6f2405feedb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054159 -n old-k8s-version-054159
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054159 -n old-k8s-version-054159: exit status 2 (324.705793ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-054159 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-054159 logs -n 25: (1.094016703s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-123357 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p bridge-123357 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p bridge-123357 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cri-dockerd --version                                                                                                                              │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p bridge-123357 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo containerd config dump                                                                                                                             │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo crio config                                                                                                                                        │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ delete  │ -p bridge-123357                                                                                                                                                         │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                        │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p kubernetes-upgrade-273161                                                                                                                                             │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p disable-driver-mounts-560932                                                                                                                                          │ disable-driver-mounts-560932 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-978795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p no-preload-978795 --alsologtostderr -v=3                                                                                                                              │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ image   │ old-k8s-version-054159 image list --format=json                                                                                                                          │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-054159 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:36:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:36:05.435590  314692 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:36:05.435743  314692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:05.435755  314692 out.go:374] Setting ErrFile to fd 2...
	I1102 13:36:05.435762  314692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:05.436031  314692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:36:05.436677  314692 out.go:368] Setting JSON to false
	I1102 13:36:05.438502  314692 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4717,"bootTime":1762085848,"procs":409,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:36:05.438644  314692 start.go:143] virtualization: kvm guest
	I1102 13:36:05.441073  314692 out.go:179] * [default-k8s-diff-port-538419] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:36:05.442523  314692 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:36:05.442625  314692 notify.go:221] Checking for updates...
	I1102 13:36:05.444929  314692 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:36:05.446245  314692 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:36:05.447444  314692 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:36:05.448643  314692 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:36:05.449853  314692 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:36:05.451841  314692 config.go:182] Loaded profile config "embed-certs-748183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:05.451995  314692 config.go:182] Loaded profile config "no-preload-978795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:05.452137  314692 config.go:182] Loaded profile config "old-k8s-version-054159": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1102 13:36:05.452265  314692 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:36:05.482704  314692 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:36:05.482885  314692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:36:05.557426  314692 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-02 13:36:05.543426305 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:36:05.557588  314692 docker.go:319] overlay module found
	I1102 13:36:05.560384  314692 out.go:179] * Using the docker driver based on user configuration
	I1102 13:36:05.561760  314692 start.go:309] selected driver: docker
	I1102 13:36:05.561779  314692 start.go:930] validating driver "docker" against <nil>
	I1102 13:36:05.561795  314692 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:36:05.562636  314692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:36:05.638262  314692 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-02 13:36:05.626506486 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:36:05.638476  314692 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 13:36:05.638823  314692 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:36:05.641773  314692 out.go:179] * Using Docker driver with root privileges
	I1102 13:36:05.643726  314692 cni.go:84] Creating CNI manager for ""
	I1102 13:36:05.643812  314692 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:36:05.643827  314692 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 13:36:05.643913  314692 start.go:353] cluster config:
	{Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:36:05.645664  314692 out.go:179] * Starting "default-k8s-diff-port-538419" primary control-plane node in "default-k8s-diff-port-538419" cluster
	I1102 13:36:05.647119  314692 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:36:05.648460  314692 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:36:05.649712  314692 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:36:05.649758  314692 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:36:05.649766  314692 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:36:05.649771  314692 cache.go:59] Caching tarball of preloaded images
	I1102 13:36:05.649861  314692 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:36:05.649876  314692 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:36:05.650007  314692 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:36:05.650038  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json: {Name:mkd7944051edd60e9de4b9749b633bdc1f3cad40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
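The profile config written above is plain JSON on disk. A minimal Go sketch for inspecting such a file; the struct fields are assumptions inferred from the cluster config dump earlier in this log, not minikube's authoritative schema:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// profileConfig models a small subset of the saved profile; the field
// names are assumptions based on the config dump printed in this log.
type profileConfig struct {
	Name             string
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
	}
}

func main() {
	raw, err := os.ReadFile(os.Args[1]) // e.g. .../profiles/<name>/config.json
	if err != nil {
		panic(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Name, cfg.KubernetesConfig.KubernetesVersion)
}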
	I1102 13:36:05.675200  314692 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:36:05.675231  314692 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:36:05.675251  314692 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:36:05.675286  314692 start.go:360] acquireMachinesLock for default-k8s-diff-port-538419: {Name:mkbdbe3f57bcc3a77e6d88e56b57947595d7b695 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:05.675387  314692 start.go:364] duration metric: took 79.928µs to acquireMachinesLock for "default-k8s-diff-port-538419"
	I1102 13:36:05.675415  314692 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:36:05.675516  314692 start.go:125] createHost starting for "" (driver="docker")
	W1102 13:36:03.231136  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	W1102 13:36:05.236180  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	I1102 13:36:02.313984  309290 out.go:252]   - Generating certificates and keys ...
	I1102 13:36:02.314106  309290 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 13:36:02.314234  309290 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 13:36:03.069894  309290 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 13:36:03.245364  309290 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 13:36:03.895960  309290 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 13:36:04.128345  309290 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 13:36:04.472099  309290 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 13:36:04.472259  309290 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-748183 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1102 13:36:04.538713  309290 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 13:36:04.538906  309290 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-748183 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1102 13:36:04.580087  309290 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 13:36:05.226663  309290 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 13:36:05.290451  309290 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 13:36:05.290556  309290 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 13:36:05.641708  309290 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 13:36:05.960162  309290 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 13:36:06.126342  309290 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 13:36:06.327269  309290 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 13:36:06.773625  309290 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 13:36:06.774998  309290 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 13:36:06.780246  309290 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1102 13:36:04.157270  299350 node_ready.go:57] node "no-preload-978795" has "Ready":"False" status (will retry)
	W1102 13:36:06.657624  299350 node_ready.go:57] node "no-preload-978795" has "Ready":"False" status (will retry)
	I1102 13:36:05.678199  314692 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 13:36:05.678459  314692 start.go:159] libmachine.API.Create for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:36:05.678519  314692 client.go:173] LocalClient.Create starting
	I1102 13:36:05.678601  314692 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem
	I1102 13:36:05.678641  314692 main.go:143] libmachine: Decoding PEM data...
	I1102 13:36:05.678658  314692 main.go:143] libmachine: Parsing certificate...
	I1102 13:36:05.678714  314692 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem
	I1102 13:36:05.678733  314692 main.go:143] libmachine: Decoding PEM data...
	I1102 13:36:05.678742  314692 main.go:143] libmachine: Parsing certificate...
	I1102 13:36:05.679050  314692 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 13:36:05.696126  314692 cli_runner.go:211] docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 13:36:05.696199  314692 network_create.go:284] running [docker network inspect default-k8s-diff-port-538419] to gather additional debugging logs...
	I1102 13:36:05.696222  314692 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419
	W1102 13:36:05.715284  314692 cli_runner.go:211] docker network inspect default-k8s-diff-port-538419 returned with exit code 1
	I1102 13:36:05.715325  314692 network_create.go:287] error running [docker network inspect default-k8s-diff-port-538419]: docker network inspect default-k8s-diff-port-538419: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-538419 not found
	I1102 13:36:05.715351  314692 network_create.go:289] output of [docker network inspect default-k8s-diff-port-538419]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-538419 not found
	
	** /stderr **
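The exit-status-1 above is not a failure: it is the probe minikube uses to decide the network still has to be created. A minimal sketch of the same probe-then-create pattern (network name, subnet, and gateway taken from this log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "default-k8s-diff-port-538419" // network name from this log
	// "docker network inspect" exits non-zero when the network is absent,
	// which is exactly the case recorded above.
	if err := exec.Command("docker", "network", "inspect", name).Run(); err != nil {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet=192.168.85.0/24",
			"--gateway=192.168.85.1", name).CombinedOutput()
		if err != nil {
			fmt.Println("create failed:", err, string(out))
			return
		}
		fmt.Println("created network", name)
		return
	}
	fmt.Println("network already exists:", name)
}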
	I1102 13:36:05.715499  314692 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:36:05.737307  314692 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9493238624b4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:ff:51:3e:e4:f4} reservation:<nil>}
	I1102 13:36:05.738188  314692 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe6e64be95e5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ec:8c:d9:e4:62} reservation:<nil>}
	I1102 13:36:05.739220  314692 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce0c0e777855 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:03:0f:01:14:50} reservation:<nil>}
	I1102 13:36:05.739841  314692 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4ae33975e63c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:e4:ca:ff:f5:a7} reservation:<nil>}
	I1102 13:36:05.740727  314692 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ef67f0}
	I1102 13:36:05.740760  314692 network_create.go:124] attempt to create docker network default-k8s-diff-port-538419 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1102 13:36:05.740823  314692 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-538419 default-k8s-diff-port-538419
	I1102 13:36:05.814279  314692 network_create.go:108] docker network default-k8s-diff-port-538419 192.168.85.0/24 created
	I1102 13:36:05.814311  314692 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-538419" container
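The "skipping subnet ... that is taken" lines step the candidate /24s nine third-octet values apart (49, 58, 67, 76) until 192.168.85.0/24 is free. A sketch of that scan, with the taken set written out by hand instead of being read from the docker bridges:

package main

import "fmt"

func main() {
	// Subnets already claimed by other profiles, per the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	// Candidate third octets advance by 9, matching the log's 49, 58, 67, 76, 85.
	for octet := 49; octet < 256; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr) // picks 192.168.85.0/24
		return
	}
}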
	I1102 13:36:05.814367  314692 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 13:36:05.837693  314692 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-538419 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-538419 --label created_by.minikube.sigs.k8s.io=true
	I1102 13:36:05.859791  314692 oci.go:103] Successfully created a docker volume default-k8s-diff-port-538419
	I1102 13:36:05.859897  314692 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-538419-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-538419 --entrypoint /usr/bin/test -v default-k8s-diff-port-538419:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 13:36:06.881907  314692 cli_runner.go:217] Completed: docker run --rm --name default-k8s-diff-port-538419-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-538419 --entrypoint /usr/bin/test -v default-k8s-diff-port-538419:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.021960601s)
	I1102 13:36:06.881936  314692 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-538419
	I1102 13:36:06.881985  314692 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:36:06.882013  314692 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 13:36:06.882080  314692 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-538419:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1102 13:36:10.275316  314692 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-538419:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.393191884s)
	I1102 13:36:10.275358  314692 kic.go:203] duration metric: took 3.393343348s to extract preloaded images to volume ...
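The 3.39s "duration metric" is simply the wall-clock time of the tar container above. A sketch of running such an extraction and timing it; /tmp/preload.tar.lz4 is a hypothetical stand-in for the cached tarball path shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same shape as the extraction above: a throwaway container untars the
	// lz4 preload into the named volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/tmp/preload.tar.lz4:/preloaded.tar:ro",
		"-v", "default-k8s-diff-port-538419:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Println("extract failed:", err, string(out))
		return
	}
	fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
}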
	W1102 13:36:10.275432  314692 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1102 13:36:10.275463  314692 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1102 13:36:10.275506  314692 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 13:36:10.335732  314692 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-538419 --name default-k8s-diff-port-538419 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-538419 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-538419 --network default-k8s-diff-port-538419 --ip 192.168.85.2 --volume default-k8s-diff-port-538419:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	W1102 13:36:07.731890  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	W1102 13:36:09.844195  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	I1102 13:36:06.781931  309290 out.go:252]   - Booting up control plane ...
	I1102 13:36:06.782085  309290 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 13:36:06.782226  309290 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 13:36:06.783116  309290 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 13:36:06.801117  309290 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 13:36:06.801274  309290 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 13:36:06.808806  309290 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 13:36:06.809054  309290 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 13:36:06.809131  309290 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 13:36:06.934754  309290 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 13:36:06.934882  309290 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 13:36:07.935656  309290 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.0010106s
	I1102 13:36:07.938500  309290 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 13:36:07.938641  309290 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1102 13:36:07.938767  309290 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 13:36:07.938871  309290 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 13:36:11.261013  309290 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.32231961s
	I1102 13:36:11.843760  309290 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.905255384s
	I1102 13:36:13.439964  309290 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501297656s
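kubeadm's control-plane-check is plain HTTP(S) polling of the endpoints listed above until each returns 200. A minimal sketch: the components serve self-signed certificates on localhost, so verification is skipped here, which is acceptable only for a local health probe; the 4m0s budget mirrors the message in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Localhost control-plane components use self-signed certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	endpoints := []string{
		"http://127.0.0.1:10248/healthz",  // kubelet
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"
	for _, url := range endpoints {
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Println(url, "is healthy")
					break
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}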
	I1102 13:36:13.450594  309290 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 13:36:13.459158  309290 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 13:36:13.467549  309290 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 13:36:13.467853  309290 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-748183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 13:36:13.475110  309290 kubeadm.go:319] [bootstrap-token] Using token: 3hyvdp.6f6epf4ijrgc86v7
	W1102 13:36:08.668309  299350 node_ready.go:57] node "no-preload-978795" has "Ready":"False" status (will retry)
	W1102 13:36:11.157318  299350 node_ready.go:57] node "no-preload-978795" has "Ready":"False" status (will retry)
	I1102 13:36:12.656920  299350 node_ready.go:49] node "no-preload-978795" is "Ready"
	I1102 13:36:12.656948  299350 node_ready.go:38] duration metric: took 12.50324534s for node "no-preload-978795" to be "Ready" ...
	I1102 13:36:12.656965  299350 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:36:12.657024  299350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:36:12.668969  299350 api_server.go:72] duration metric: took 12.950821403s to wait for apiserver process to appear ...
	I1102 13:36:12.669000  299350 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:36:12.669028  299350 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1102 13:36:12.673841  299350 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1102 13:36:12.674694  299350 api_server.go:141] control plane version: v1.34.1
	I1102 13:36:12.674720  299350 api_server.go:131] duration metric: took 5.712193ms to wait for apiserver health ...
	I1102 13:36:12.674729  299350 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:36:12.677381  299350 system_pods.go:59] 8 kube-system pods found
	I1102 13:36:12.677409  299350 system_pods.go:61] "coredns-66bc5c9577-2dtpc" [8533e5ca-78ef-4401-b967-018eceeb5321] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:12.677415  299350 system_pods.go:61] "etcd-no-preload-978795" [3accbd43-641b-4243-9b9f-d7b40c27d25a] Running
	I1102 13:36:12.677420  299350 system_pods.go:61] "kindnet-d8n4x" [c337ae93-812a-455b-bfe4-cdf49864936f] Running
	I1102 13:36:12.677424  299350 system_pods.go:61] "kube-apiserver-no-preload-978795" [aca34947-60ef-4b9f-a159-b323fd9c325e] Running
	I1102 13:36:12.677431  299350 system_pods.go:61] "kube-controller-manager-no-preload-978795" [995b65a0-2705-4e33-a002-44f3db50a736] Running
	I1102 13:36:12.677436  299350 system_pods.go:61] "kube-proxy-rmkmd" [98f26f5f-cb23-4052-a93d-328210c54a54] Running
	I1102 13:36:12.677439  299350 system_pods.go:61] "kube-scheduler-no-preload-978795" [f2b2b91b-09ee-414a-9675-eafad041fcfa] Running
	I1102 13:36:12.677443  299350 system_pods.go:61] "storage-provisioner" [0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:36:12.677448  299350 system_pods.go:74] duration metric: took 2.714415ms to wait for pod list to return data ...
	I1102 13:36:12.677454  299350 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:36:12.679625  299350 default_sa.go:45] found service account: "default"
	I1102 13:36:12.679642  299350 default_sa.go:55] duration metric: took 2.182973ms for default service account to be created ...
	I1102 13:36:12.679651  299350 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:36:12.682139  299350 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:12.682173  299350 system_pods.go:89] "coredns-66bc5c9577-2dtpc" [8533e5ca-78ef-4401-b967-018eceeb5321] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:12.682182  299350 system_pods.go:89] "etcd-no-preload-978795" [3accbd43-641b-4243-9b9f-d7b40c27d25a] Running
	I1102 13:36:12.682197  299350 system_pods.go:89] "kindnet-d8n4x" [c337ae93-812a-455b-bfe4-cdf49864936f] Running
	I1102 13:36:12.682204  299350 system_pods.go:89] "kube-apiserver-no-preload-978795" [aca34947-60ef-4b9f-a159-b323fd9c325e] Running
	I1102 13:36:12.682214  299350 system_pods.go:89] "kube-controller-manager-no-preload-978795" [995b65a0-2705-4e33-a002-44f3db50a736] Running
	I1102 13:36:12.682222  299350 system_pods.go:89] "kube-proxy-rmkmd" [98f26f5f-cb23-4052-a93d-328210c54a54] Running
	I1102 13:36:12.682229  299350 system_pods.go:89] "kube-scheduler-no-preload-978795" [f2b2b91b-09ee-414a-9675-eafad041fcfa] Running
	I1102 13:36:12.682241  299350 system_pods.go:89] "storage-provisioner" [0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:36:12.682264  299350 retry.go:31] will retry after 248.72906ms: missing components: kube-dns
	I1102 13:36:12.935533  299350 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:12.935591  299350 system_pods.go:89] "coredns-66bc5c9577-2dtpc" [8533e5ca-78ef-4401-b967-018eceeb5321] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:12.935603  299350 system_pods.go:89] "etcd-no-preload-978795" [3accbd43-641b-4243-9b9f-d7b40c27d25a] Running
	I1102 13:36:12.935612  299350 system_pods.go:89] "kindnet-d8n4x" [c337ae93-812a-455b-bfe4-cdf49864936f] Running
	I1102 13:36:12.935618  299350 system_pods.go:89] "kube-apiserver-no-preload-978795" [aca34947-60ef-4b9f-a159-b323fd9c325e] Running
	I1102 13:36:12.935624  299350 system_pods.go:89] "kube-controller-manager-no-preload-978795" [995b65a0-2705-4e33-a002-44f3db50a736] Running
	I1102 13:36:12.935629  299350 system_pods.go:89] "kube-proxy-rmkmd" [98f26f5f-cb23-4052-a93d-328210c54a54] Running
	I1102 13:36:12.935634  299350 system_pods.go:89] "kube-scheduler-no-preload-978795" [f2b2b91b-09ee-414a-9675-eafad041fcfa] Running
	I1102 13:36:12.935663  299350 system_pods.go:89] "storage-provisioner" [0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:36:12.935688  299350 retry.go:31] will retry after 385.867833ms: missing components: kube-dns
	I1102 13:36:13.326168  299350 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:13.326202  299350 system_pods.go:89] "coredns-66bc5c9577-2dtpc" [8533e5ca-78ef-4401-b967-018eceeb5321] Running
	I1102 13:36:13.326211  299350 system_pods.go:89] "etcd-no-preload-978795" [3accbd43-641b-4243-9b9f-d7b40c27d25a] Running
	I1102 13:36:13.326216  299350 system_pods.go:89] "kindnet-d8n4x" [c337ae93-812a-455b-bfe4-cdf49864936f] Running
	I1102 13:36:13.326222  299350 system_pods.go:89] "kube-apiserver-no-preload-978795" [aca34947-60ef-4b9f-a159-b323fd9c325e] Running
	I1102 13:36:13.326227  299350 system_pods.go:89] "kube-controller-manager-no-preload-978795" [995b65a0-2705-4e33-a002-44f3db50a736] Running
	I1102 13:36:13.326232  299350 system_pods.go:89] "kube-proxy-rmkmd" [98f26f5f-cb23-4052-a93d-328210c54a54] Running
	I1102 13:36:13.326237  299350 system_pods.go:89] "kube-scheduler-no-preload-978795" [f2b2b91b-09ee-414a-9675-eafad041fcfa] Running
	I1102 13:36:13.326244  299350 system_pods.go:89] "storage-provisioner" [0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1] Running
	I1102 13:36:13.326266  299350 system_pods.go:126] duration metric: took 646.609426ms to wait for k8s-apps to be running ...
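The "will retry after 248.72906ms: missing components: kube-dns" lines above are a poll with randomized backoff. A generic sketch of that loop shape, where podsRunning is a stand-in for the real kube-system pod listing:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// podsRunning stands in for listing kube-system pods; here it simply
// simulates kube-dns becoming Ready on the third poll.
func podsRunning(attempt int) bool { return attempt >= 3 }

func main() {
	start := time.Now()
	for attempt := 1; ; attempt++ {
		if podsRunning(attempt) {
			fmt.Printf("took %s to wait for k8s-apps to be running\n", time.Since(start))
			return
		}
		// Jittered backoff, loosely matching the 248ms/385ms waits above.
		wait := time.Duration(100+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %s: missing components: kube-dns\n", wait)
		time.Sleep(wait)
	}
}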
	I1102 13:36:13.326274  299350 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:36:13.326325  299350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:36:13.339761  299350 system_svc.go:56] duration metric: took 13.478931ms WaitForService to wait for kubelet
	I1102 13:36:13.339793  299350 kubeadm.go:587] duration metric: took 13.621650881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:36:13.339815  299350 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:36:13.342730  299350 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:36:13.342755  299350 node_conditions.go:123] node cpu capacity is 8
	I1102 13:36:13.342771  299350 node_conditions.go:105] duration metric: took 2.950985ms to run NodePressure ...
	I1102 13:36:13.342785  299350 start.go:242] waiting for startup goroutines ...
	I1102 13:36:13.342794  299350 start.go:247] waiting for cluster config update ...
	I1102 13:36:13.342810  299350 start.go:256] writing updated cluster config ...
	I1102 13:36:13.343084  299350 ssh_runner.go:195] Run: rm -f paused
	I1102 13:36:13.346918  299350 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:36:13.349803  299350 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.354285  299350 pod_ready.go:94] pod "coredns-66bc5c9577-2dtpc" is "Ready"
	I1102 13:36:13.354304  299350 pod_ready.go:86] duration metric: took 4.475697ms for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.356253  299350 pod_ready.go:83] waiting for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.359951  299350 pod_ready.go:94] pod "etcd-no-preload-978795" is "Ready"
	I1102 13:36:13.359969  299350 pod_ready.go:86] duration metric: took 3.697797ms for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.361639  299350 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.364958  299350 pod_ready.go:94] pod "kube-apiserver-no-preload-978795" is "Ready"
	I1102 13:36:13.364981  299350 pod_ready.go:86] duration metric: took 3.323803ms for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.366580  299350 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.476104  309290 out.go:252]   - Configuring RBAC rules ...
	I1102 13:36:13.476262  309290 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 13:36:13.481572  309290 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 13:36:13.486194  309290 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 13:36:13.488347  309290 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 13:36:13.490661  309290 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 13:36:13.492848  309290 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 13:36:13.847654  309290 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 13:36:14.264087  309290 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 13:36:14.845928  309290 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 13:36:14.846937  309290 kubeadm.go:319] 
	I1102 13:36:14.847017  309290 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 13:36:14.847029  309290 kubeadm.go:319] 
	I1102 13:36:14.847091  309290 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 13:36:14.847099  309290 kubeadm.go:319] 
	I1102 13:36:14.847119  309290 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 13:36:14.847177  309290 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 13:36:14.847252  309290 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 13:36:14.847271  309290 kubeadm.go:319] 
	I1102 13:36:14.847314  309290 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 13:36:14.847320  309290 kubeadm.go:319] 
	I1102 13:36:14.847363  309290 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 13:36:14.847370  309290 kubeadm.go:319] 
	I1102 13:36:14.847411  309290 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 13:36:14.847510  309290 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 13:36:14.847637  309290 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 13:36:14.847663  309290 kubeadm.go:319] 
	I1102 13:36:14.847795  309290 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 13:36:14.847901  309290 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 13:36:14.847910  309290 kubeadm.go:319] 
	I1102 13:36:14.848013  309290 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3hyvdp.6f6epf4ijrgc86v7 \
	I1102 13:36:14.848179  309290 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 \
	I1102 13:36:14.848214  309290 kubeadm.go:319] 	--control-plane 
	I1102 13:36:14.848231  309290 kubeadm.go:319] 
	I1102 13:36:14.848347  309290 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 13:36:14.848358  309290 kubeadm.go:319] 
	I1102 13:36:14.848471  309290 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3hyvdp.6f6epf4ijrgc86v7 \
	I1102 13:36:14.848632  309290 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 
	I1102 13:36:14.852045  309290 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1102 13:36:14.852139  309290 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
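The --discovery-token-ca-cert-hash in the join command above is "sha256:" plus the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch that recomputes it from a ca.crt path passed on the command line:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Pass the CA path, e.g. /etc/kubernetes/pki/ca.crt on the node.
	raw, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("not a PEM certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("--discovery-token-ca-cert-hash sha256:%x\n", sum)
}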
	I1102 13:36:14.852161  309290 cni.go:84] Creating CNI manager for ""
	I1102 13:36:14.852170  309290 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:36:14.854260  309290 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 13:36:13.750618  299350 pod_ready.go:94] pod "kube-controller-manager-no-preload-978795" is "Ready"
	I1102 13:36:13.750648  299350 pod_ready.go:86] duration metric: took 384.047554ms for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:13.951214  299350 pod_ready.go:83] waiting for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:14.351367  299350 pod_ready.go:94] pod "kube-proxy-rmkmd" is "Ready"
	I1102 13:36:14.351393  299350 pod_ready.go:86] duration metric: took 400.155205ms for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:14.550498  299350 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:14.950992  299350 pod_ready.go:94] pod "kube-scheduler-no-preload-978795" is "Ready"
	I1102 13:36:14.951024  299350 pod_ready.go:86] duration metric: took 400.49724ms for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:14.951045  299350 pod_ready.go:40] duration metric: took 1.604102284s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:36:15.004484  299350 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:36:15.007265  299350 out.go:179] * Done! kubectl is now configured to use "no-preload-978795" cluster and "default" namespace by default
	I1102 13:36:10.615016  314692 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Running}}
	I1102 13:36:10.636417  314692 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:36:10.654048  314692 cli_runner.go:164] Run: docker exec default-k8s-diff-port-538419 stat /var/lib/dpkg/alternatives/iptables
	I1102 13:36:10.699729  314692 oci.go:144] the created container "default-k8s-diff-port-538419" has a running status.
	I1102 13:36:10.699770  314692 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa...
	I1102 13:36:10.762874  314692 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 13:36:10.790352  314692 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:36:10.807665  314692 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 13:36:10.807686  314692 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-538419 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 13:36:10.848037  314692 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:36:10.869592  314692 machine.go:94] provisionDockerMachine start ...
	I1102 13:36:10.869697  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
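The inspect -f call above recovers the host port docker mapped to the container's 22/tcp, which is the 127.0.0.1:33110 the SSH client dials below. A sketch of the same lookup; the format string is copied from this log, and trimming the surrounding quotes is an assumption about the raw output:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "default-k8s-diff-port-538419"
	// Identical format string to the cli_runner invocations in this log.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`, name).Output()
	if err != nil {
		panic(err)
	}
	port := strings.Trim(strings.TrimSpace(string(out)), "'")
	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // e.g. 127.0.0.1:33110
}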
	I1102 13:36:10.897052  314692 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:10.897380  314692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1102 13:36:10.897399  314692 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:36:10.898419  314692 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47708->127.0.0.1:33110: read: connection reset by peer
	I1102 13:36:14.044265  314692 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:36:14.044295  314692 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-538419"
	I1102 13:36:14.044389  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:14.065074  314692 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:14.065363  314692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1102 13:36:14.065385  314692 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-538419 && echo "default-k8s-diff-port-538419" | sudo tee /etc/hostname
	I1102 13:36:14.227057  314692 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:36:14.227141  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:14.248649  314692 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:14.248931  314692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1102 13:36:14.248961  314692 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-538419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-538419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-538419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:36:14.393279  314692 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:36:14.393304  314692 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:36:14.393351  314692 ubuntu.go:190] setting up certificates
	I1102 13:36:14.393360  314692 provision.go:84] configureAuth start
	I1102 13:36:14.393407  314692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:36:14.412456  314692 provision.go:143] copyHostCerts
	I1102 13:36:14.412522  314692 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:36:14.412536  314692 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:36:14.412630  314692 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:36:14.412777  314692 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:36:14.412794  314692 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:36:14.412843  314692 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:36:14.412940  314692 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:36:14.412971  314692 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:36:14.413013  314692 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:36:14.413094  314692 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-538419 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-538419 localhost minikube]
	I1102 13:36:14.581836  314692 provision.go:177] copyRemoteCerts
	I1102 13:36:14.581894  314692 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:36:14.581927  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:14.601054  314692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:36:14.702033  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:36:14.722483  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 13:36:14.740538  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:36:14.758816  314692 provision.go:87] duration metric: took 365.441182ms to configureAuth
	I1102 13:36:14.758850  314692 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:36:14.759049  314692 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:14.759176  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:14.777002  314692 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:14.777225  314692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I1102 13:36:14.777242  314692 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:36:15.055729  314692 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:36:15.055754  314692 machine.go:97] duration metric: took 4.186138313s to provisionDockerMachine
	I1102 13:36:15.055767  314692 client.go:176] duration metric: took 9.377238341s to LocalClient.Create
	I1102 13:36:15.055789  314692 start.go:167] duration metric: took 9.377330266s to libmachine.API.Create "default-k8s-diff-port-538419"
	I1102 13:36:15.055802  314692 start.go:293] postStartSetup for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:36:15.055817  314692 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:36:15.055889  314692 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:36:15.055938  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:15.077020  314692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:36:15.194616  314692 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:36:15.200392  314692 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:36:15.200430  314692 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:36:15.200443  314692 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:36:15.200499  314692 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:36:15.200657  314692 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:36:15.200803  314692 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:36:15.211740  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:15.237974  314692 start.go:296] duration metric: took 182.156913ms for postStartSetup
	I1102 13:36:15.238391  314692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:36:15.259283  314692 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:36:15.259631  314692 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:36:15.259686  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:15.279422  314692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:36:15.376664  314692 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:36:15.381089  314692 start.go:128] duration metric: took 9.705559601s to createHost
	I1102 13:36:15.381118  314692 start.go:83] releasing machines lock for "default-k8s-diff-port-538419", held for 9.705716798s
	I1102 13:36:15.381184  314692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:36:15.398864  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:36:15.398944  314692 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:36:15.398959  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:36:15.398991  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:36:15.399048  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:36:15.399094  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:36:15.399152  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:15.399230  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:36:15.399282  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:15.416861  314692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	W1102 13:36:12.230844  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	W1102 13:36:14.730477  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	I1102 13:36:14.855544  309290 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 13:36:14.860124  309290 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 13:36:14.860144  309290 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 13:36:14.873520  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1102 13:36:15.117704  309290 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 13:36:15.117868  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:15.117961  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-748183 minikube.k8s.io/updated_at=2025_11_02T13_36_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=embed-certs-748183 minikube.k8s.io/primary=true
	I1102 13:36:15.131951  309290 ops.go:34] apiserver oom_adj: -16
	I1102 13:36:15.218322  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:15.719272  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:16.218674  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:15.530719  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:36:15.548152  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:36:15.566056  314692 ssh_runner.go:195] Run: openssl version
	I1102 13:36:15.572246  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:36:15.580874  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:36:15.584764  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:36:15.584830  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:36:15.620546  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:36:15.629741  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:36:15.638338  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:36:15.642268  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:36:15.642313  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:36:15.678223  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:36:15.687210  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:36:15.695816  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:15.699547  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:15.699614  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:15.747068  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:36:15.756607  314692 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:36:15.760925  314692 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
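
The openssl/ln sequence above implements OpenSSL's hashed-directory lookup: trust stores resolve a CA by a symlink named <subject-hash>.0 in /etc/ssl/certs. A sketch of the same operation done by hand for one cert (path taken from the log):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL looks CAs up as <hash>.0
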
	I1102 13:36:15.764886  314692 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:36:15.765004  314692 ssh_runner.go:195] Run: cat /version.json
	I1102 13:36:15.825446  314692 ssh_runner.go:195] Run: systemctl --version
	I1102 13:36:15.832220  314692 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:36:15.866819  314692 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:36:15.871788  314692 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:36:15.871857  314692 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:36:15.897972  314692 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
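
The find/-exec line above is logged with its shell quoting stripped. An equivalent, readable form (same effect, assuming the standard /etc/cni/net.d layout): any bridge or podman CNI config is renamed out of the way so that only kindnet's config remains active:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
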
	I1102 13:36:15.897992  314692 start.go:496] detecting cgroup driver to use...
	I1102 13:36:15.898017  314692 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:36:15.898053  314692 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:36:15.913685  314692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:36:15.926165  314692 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:36:15.926219  314692 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:36:15.942756  314692 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:36:15.960041  314692 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:36:16.049211  314692 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:36:16.140550  314692 docker.go:234] disabling docker service ...
	I1102 13:36:16.140652  314692 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:36:16.158746  314692 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:36:16.172119  314692 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:36:16.257545  314692 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:36:16.342957  314692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:36:16.356102  314692 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:36:16.370528  314692 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:36:16.370607  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.380612  314692 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:36:16.380679  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.389314  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.398142  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.406653  314692 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:36:16.415009  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.424040  314692 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:16.437426  314692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
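
After the sed edits above, the relevant drop-in should look roughly like this (illustrative only; the section headers follow CRI-O's TOML schema and the actual file may differ):

	$ sudo cat /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
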
	I1102 13:36:16.446180  314692 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:36:16.453356  314692 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:36:16.460870  314692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:16.542362  314692 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:36:16.650720  314692 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:36:16.650782  314692 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:36:16.654947  314692 start.go:564] Will wait 60s for crictl version
	I1102 13:36:16.654995  314692 ssh_runner.go:195] Run: which crictl
	I1102 13:36:16.658580  314692 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:36:16.682711  314692 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
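
The same runtime handshake can be reproduced by hand; the endpoint matches the runtime-endpoint written to /etc/crictl.yaml a few lines earlier:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
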
	I1102 13:36:16.682789  314692 ssh_runner.go:195] Run: crio --version
	I1102 13:36:16.711010  314692 ssh_runner.go:195] Run: crio --version
	I1102 13:36:16.743545  314692 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:36:16.745730  314692 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:36:16.769441  314692 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 13:36:16.774346  314692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:36:16.788062  314692 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:36:16.788175  314692 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:36:16.788219  314692 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:36:16.829012  314692 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:36:16.829038  314692 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:36:16.829095  314692 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:36:16.861101  314692 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:36:16.861128  314692 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:36:16.861137  314692 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1102 13:36:16.861232  314692 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
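
The empty ExecStart= line in the unit fragment above is the standard systemd idiom: a drop-in must first clear the base unit's command list before redefining it. On the node, the merged result of the base unit plus the drop-in can be inspected with:

	systemctl cat kubelet
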
	I1102 13:36:16.861306  314692 ssh_runner.go:195] Run: crio config
	I1102 13:36:16.911369  314692 cni.go:84] Creating CNI manager for ""
	I1102 13:36:16.911403  314692 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:36:16.911419  314692 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:36:16.911451  314692 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538419 NodeName:default-k8s-diff-port-538419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:36:16.911656  314692 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:36:16.911714  314692 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:36:16.920055  314692 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:36:16.920130  314692 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:36:16.928948  314692 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 13:36:16.943385  314692 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:36:16.958903  314692 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
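
Once kubeadm.yaml.new is staged, the generated config can be sanity-checked offline with the same kubeadm binary ('kubeadm config validate' is available since Kubernetes 1.26):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
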
	I1102 13:36:16.972807  314692 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:36:16.976645  314692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:36:16.988026  314692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:17.070379  314692 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:36:17.095169  314692 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419 for IP: 192.168.85.2
	I1102 13:36:17.095194  314692 certs.go:195] generating shared ca certs ...
	I1102 13:36:17.095216  314692 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.095404  314692 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:36:17.095471  314692 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:36:17.095486  314692 certs.go:257] generating profile certs ...
	I1102 13:36:17.095574  314692 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key
	I1102 13:36:17.095593  314692 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.crt with IP's: []
	I1102 13:36:17.314527  314692 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.crt ...
	I1102 13:36:17.314554  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.crt: {Name:mk33b2e40a938c6fe809d4a8e985371cc5806071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.314759  314692 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key ...
	I1102 13:36:17.314780  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key: {Name:mkd72b526d33383930c74a87122742dae4f9c1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.314876  314692 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d
	I1102 13:36:17.314901  314692 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt.ff08289d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1102 13:36:17.676789  314692 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt.ff08289d ...
	I1102 13:36:17.676816  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt.ff08289d: {Name:mkd6b2fb9849a4b5918b1a6c11ed704b30cbfc7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.676982  314692 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d ...
	I1102 13:36:17.676996  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d: {Name:mk943eea6a4f97dc7db9628f9cf8c6ad9a1a0ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.677066  314692 certs.go:382] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt.ff08289d -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt
	I1102 13:36:17.677142  314692 certs.go:386] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key
	I1102 13:36:17.677195  314692 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key
	I1102 13:36:17.677211  314692 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt with IP's: []
	I1102 13:36:17.795777  314692 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt ...
	I1102 13:36:17.795804  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt: {Name:mk4445f5eac21c77379a3af06cd000490a3c92e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:17.795964  314692 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key ...
	I1102 13:36:17.795978  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key: {Name:mk2cd1fcae7ce412117f843d403ccb7295c5d3f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
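
The apiserver cert generated above is signed for the service VIP, localhost, and the node IP. To confirm the SANs that ended up in the assembled cert (path from the log; -ext requires OpenSSL 1.1.1 or newer):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt
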
	I1102 13:36:17.796162  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:36:17.796197  314692 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:36:17.796207  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:36:17.796227  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:36:17.796248  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:36:17.796268  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:36:17.796308  314692 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:17.796818  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:36:17.815829  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:36:17.833605  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:36:17.851062  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:36:17.869100  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 13:36:17.886289  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:36:17.903502  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:36:17.921328  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:36:17.938765  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:36:17.955853  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:36:17.973411  314692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:36:17.991438  314692 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:36:18.004750  314692 ssh_runner.go:195] Run: openssl version
	I1102 13:36:18.011970  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:36:18.021125  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:18.025417  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:18.025470  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:18.062923  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:36:18.071543  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:36:18.080340  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:36:18.084122  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:36:18.084184  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:36:18.120469  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:36:18.129115  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:36:18.137634  314692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:36:18.141539  314692 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:36:18.141602  314692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:36:18.178120  314692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:36:18.186380  314692 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:36:18.190139  314692 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 13:36:18.190202  314692 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:36:18.190263  314692 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:36:18.190314  314692 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:36:18.219699  314692 cri.go:89] found id: ""
	I1102 13:36:18.219766  314692 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:36:18.229163  314692 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 13:36:18.237732  314692 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 13:36:18.237794  314692 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 13:36:18.246294  314692 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 13:36:18.246318  314692 kubeadm.go:158] found existing configuration files:
	
	I1102 13:36:18.246380  314692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1102 13:36:18.254067  314692 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 13:36:18.254132  314692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 13:36:18.262448  314692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1102 13:36:18.271074  314692 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 13:36:18.271134  314692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 13:36:18.279732  314692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1102 13:36:18.288400  314692 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 13:36:18.288479  314692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 13:36:18.295833  314692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1102 13:36:18.303350  314692 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 13:36:18.303425  314692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1102 13:36:18.311139  314692 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
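
When one of the skipped checks needs debugging, the preflight phase can be run on its own against the same config (the flags mirror the init line above):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml
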
	I1102 13:36:18.350216  314692 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 13:36:18.350300  314692 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 13:36:18.372987  314692 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 13:36:18.373101  314692 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1102 13:36:18.373155  314692 kubeadm.go:319] OS: Linux
	I1102 13:36:18.373229  314692 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 13:36:18.373304  314692 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 13:36:18.373389  314692 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 13:36:18.373468  314692 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 13:36:18.373533  314692 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 13:36:18.373617  314692 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 13:36:18.373718  314692 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 13:36:18.373816  314692 kubeadm.go:319] CGROUPS_IO: enabled
	I1102 13:36:18.432435  314692 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 13:36:18.432557  314692 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 13:36:18.432690  314692 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 13:36:18.439861  314692 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 13:36:16.719393  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:17.218927  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:17.718523  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:18.218633  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:18.718482  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:19.219410  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:19.718732  309290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:19.800901  309290 kubeadm.go:1114] duration metric: took 4.683077814s to wait for elevateKubeSystemPrivileges
	I1102 13:36:19.800946  309290 kubeadm.go:403] duration metric: took 17.795902376s to StartCluster
	I1102 13:36:19.800968  309290 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:19.801034  309290 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:36:19.803196  309290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:19.803463  309290 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 13:36:19.803485  309290 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:36:19.803546  309290 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:36:19.803663  309290 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-748183"
	I1102 13:36:19.803675  309290 config.go:182] Loaded profile config "embed-certs-748183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:19.803681  309290 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-748183"
	I1102 13:36:19.803696  309290 addons.go:70] Setting default-storageclass=true in profile "embed-certs-748183"
	I1102 13:36:19.803725  309290 host.go:66] Checking if "embed-certs-748183" exists ...
	I1102 13:36:19.803731  309290 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-748183"
	I1102 13:36:19.804096  309290 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:36:19.804273  309290 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:36:19.805888  309290 out.go:179] * Verifying Kubernetes components...
	I1102 13:36:19.807211  309290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:19.828857  309290 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:36:19.829978  309290 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:36:19.829998  309290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:36:19.830053  309290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:36:19.831353  309290 addons.go:239] Setting addon default-storageclass=true in "embed-certs-748183"
	I1102 13:36:19.831395  309290 host.go:66] Checking if "embed-certs-748183" exists ...
	I1102 13:36:19.831872  309290 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:36:19.860998  309290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:36:19.861418  309290 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:36:19.861453  309290 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:36:19.861517  309290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:36:19.885111  309290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:36:19.900279  309290 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 13:36:19.943668  309290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:36:19.986109  309290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:36:19.998900  309290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:36:20.079731  309290 start.go:1013] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
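
The sed pipeline above splices a hosts{} block into the CoreDNS Corefile so in-cluster workloads can resolve host.minikube.internal. The patched Corefile can be read back with:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
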
	I1102 13:36:20.081284  309290 node_ready.go:35] waiting up to 6m0s for node "embed-certs-748183" to be "Ready" ...
	I1102 13:36:20.304961  309290 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 13:36:18.442017  314692 out.go:252]   - Generating certificates and keys ...
	I1102 13:36:18.442121  314692 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 13:36:18.442224  314692 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 13:36:18.730940  314692 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 13:36:18.858951  314692 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 13:36:19.384727  314692 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 13:36:19.412774  314692 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 13:36:19.902782  314692 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 13:36:19.902998  314692 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-538419 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1102 13:36:20.222663  314692 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 13:36:20.222887  314692 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-538419 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	W1102 13:36:16.730904  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	W1102 13:36:18.740498  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	I1102 13:36:20.305958  309290 addons.go:515] duration metric: took 502.411848ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 13:36:20.585139  309290 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-748183" context rescaled to 1 replicas
	I1102 13:36:20.484633  314692 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 13:36:20.576482  314692 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 13:36:20.683047  314692 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 13:36:20.683224  314692 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 13:36:21.190668  314692 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 13:36:21.263301  314692 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 13:36:22.036334  314692 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 13:36:22.143162  314692 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 13:36:22.245254  314692 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 13:36:22.245836  314692 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 13:36:22.249649  314692 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 13:36:22.251184  314692 out.go:252]   - Booting up control plane ...
	I1102 13:36:22.251279  314692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 13:36:22.251374  314692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 13:36:22.251840  314692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 13:36:22.265479  314692 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 13:36:22.265633  314692 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 13:36:22.272212  314692 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 13:36:22.272492  314692 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 13:36:22.272550  314692 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 13:36:22.369814  314692 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 13:36:22.369957  314692 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 13:36:22.871448  314692 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.780894ms
	I1102 13:36:22.876197  314692 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 13:36:22.876311  314692 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1102 13:36:22.876436  314692 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 13:36:22.876574  314692 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 13:36:24.074633  314692 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.198248881s
	I1102 13:36:25.001809  314692 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.125401119s
	W1102 13:36:21.230432  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	W1102 13:36:23.730503  304242 pod_ready.go:104] pod "coredns-5dd5756b68-th5sb" is not "Ready", error: <nil>
	I1102 13:36:24.232882  304242 pod_ready.go:94] pod "coredns-5dd5756b68-th5sb" is "Ready"
	I1102 13:36:24.232912  304242 pod_ready.go:86] duration metric: took 34.508409705s for pod "coredns-5dd5756b68-th5sb" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:24.236659  304242 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:24.241435  304242 pod_ready.go:94] pod "etcd-old-k8s-version-054159" is "Ready"
	I1102 13:36:24.241469  304242 pod_ready.go:86] duration metric: took 4.781218ms for pod "etcd-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:24.244632  304242 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:24.249842  304242 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-054159" is "Ready"
	I1102 13:36:24.249866  304242 pod_ready.go:86] duration metric: took 5.211514ms for pod "kube-apiserver-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:24.253266  304242 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:24.429418  304242 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-054159" is "Ready"
	I1102 13:36:24.429446  304242 pod_ready.go:86] duration metric: took 176.154261ms for pod "kube-controller-manager-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:24.629413  304242 pod_ready.go:83] waiting for pod "kube-proxy-l2sh4" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:25.028083  304242 pod_ready.go:94] pod "kube-proxy-l2sh4" is "Ready"
	I1102 13:36:25.028106  304242 pod_ready.go:86] duration metric: took 398.665268ms for pod "kube-proxy-l2sh4" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:25.229916  304242 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:25.628878  304242 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-054159" is "Ready"
	I1102 13:36:25.628905  304242 pod_ready.go:86] duration metric: took 398.952764ms for pod "kube-scheduler-old-k8s-version-054159" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:25.628917  304242 pod_ready.go:40] duration metric: took 35.910584142s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:36:25.674488  304242 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1102 13:36:25.676051  304242 out.go:203] 
	W1102 13:36:25.677271  304242 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1102 13:36:25.678438  304242 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1102 13:36:25.679778  304242 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-054159" cluster and "default" namespace by default
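	The skew warning above is expected for this profile: the cluster pins Kubernetes v1.28.0 while the host kubectl is v1.34.1, and upstream kubectl only guarantees compatibility within one minor version of the API server. A quick way to confirm the skew by hand, using standard kubectl/minikube invocations (illustrative, not part of the captured run):
	
	  kubectl version --client                                              # host client: v1.34.1
	  out/minikube-linux-amd64 -p old-k8s-version-054159 kubectl -- version # should use a matched v1.28.0 client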
	W1102 13:36:22.084911  309290 node_ready.go:57] node "embed-certs-748183" has "Ready":"False" status (will retry)
	W1102 13:36:24.085239  309290 node_ready.go:57] node "embed-certs-748183" has "Ready":"False" status (will retry)
	I1102 13:36:26.877979  314692 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00173045s
	I1102 13:36:26.889965  314692 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 13:36:26.898357  314692 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 13:36:26.906975  314692 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 13:36:26.907241  314692 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-538419 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 13:36:26.914882  314692 kubeadm.go:319] [bootstrap-token] Using token: ww4suq.9az29vtn3yo23i6u
	I1102 13:36:26.916306  314692 out.go:252]   - Configuring RBAC rules ...
	I1102 13:36:26.916461  314692 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 13:36:26.920885  314692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 13:36:26.929139  314692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 13:36:26.931914  314692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 13:36:26.934513  314692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 13:36:26.936868  314692 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 13:36:27.284698  314692 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 13:36:27.700159  314692 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 13:36:28.284246  314692 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 13:36:28.285207  314692 kubeadm.go:319] 
	I1102 13:36:28.285289  314692 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 13:36:28.285299  314692 kubeadm.go:319] 
	I1102 13:36:28.285387  314692 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 13:36:28.285395  314692 kubeadm.go:319] 
	I1102 13:36:28.285417  314692 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 13:36:28.285500  314692 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 13:36:28.285554  314692 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 13:36:28.285581  314692 kubeadm.go:319] 
	I1102 13:36:28.285647  314692 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 13:36:28.285653  314692 kubeadm.go:319] 
	I1102 13:36:28.285697  314692 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 13:36:28.285706  314692 kubeadm.go:319] 
	I1102 13:36:28.285750  314692 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 13:36:28.285825  314692 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 13:36:28.285887  314692 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 13:36:28.285893  314692 kubeadm.go:319] 
	I1102 13:36:28.285962  314692 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 13:36:28.286047  314692 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 13:36:28.286054  314692 kubeadm.go:319] 
	I1102 13:36:28.286136  314692 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token ww4suq.9az29vtn3yo23i6u \
	I1102 13:36:28.286229  314692 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 \
	I1102 13:36:28.286257  314692 kubeadm.go:319] 	--control-plane 
	I1102 13:36:28.286266  314692 kubeadm.go:319] 
	I1102 13:36:28.286353  314692 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 13:36:28.286361  314692 kubeadm.go:319] 
	I1102 13:36:28.286443  314692 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token ww4suq.9az29vtn3yo23i6u \
	I1102 13:36:28.286613  314692 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 
	I1102 13:36:28.289557  314692 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1102 13:36:28.289688  314692 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
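	The [control-plane-check] lines above poll fixed health endpoints until each component answers. The same probes can be issued manually from the node; the addresses and ports are taken verbatim from the log (8444 is this profile's apiserver port), and curl -k skips TLS verification, so this is a rough check only:
	
	  curl -k https://192.168.85.2:8444/livez    # kube-apiserver
	  curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
	  curl -k https://127.0.0.1:10259/livez      # kube-scheduler
	  curl http://127.0.0.1:10248/healthz        # kubelet ([kubelet-check] endpoint)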
	I1102 13:36:28.289701  314692 cni.go:84] Creating CNI manager for ""
	I1102 13:36:28.289707  314692 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:36:28.291936  314692 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 13:36:28.293118  314692 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 13:36:28.297379  314692 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 13:36:28.297398  314692 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 13:36:28.310588  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1102 13:36:28.519196  314692 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 13:36:28.519322  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-538419 minikube.k8s.io/updated_at=2025_11_02T13_36_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=default-k8s-diff-port-538419 minikube.k8s.io/primary=true
	I1102 13:36:28.519349  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:28.529129  314692 ops.go:34] apiserver oom_adj: -16
	I1102 13:36:28.621071  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:29.121744  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:29.621423  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:30.122130  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1102 13:36:26.584457  309290 node_ready.go:57] node "embed-certs-748183" has "Ready":"False" status (will retry)
	W1102 13:36:28.585141  309290 node_ready.go:57] node "embed-certs-748183" has "Ready":"False" status (will retry)
	W1102 13:36:31.084790  309290 node_ready.go:57] node "embed-certs-748183" has "Ready":"False" status (will retry)
	I1102 13:36:30.621609  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:31.121198  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:31.621763  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:32.121770  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:32.621265  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:33.121205  314692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:36:33.190968  314692 kubeadm.go:1114] duration metric: took 4.671693397s to wait for elevateKubeSystemPrivileges
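	The burst of `kubectl get sa default` runs above is minikube polling at roughly 500ms intervals until the cluster's "default" ServiceAccount exists, since the minikube-rbac ClusterRoleBinding created earlier only takes effect once that account is present. A minimal shell sketch of the same wait; the loop itself is an illustration, only the command line is copied from the log:
	
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # matches the ~500ms cadence of the timestamps above
	  done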
	I1102 13:36:33.191013  314692 kubeadm.go:403] duration metric: took 15.000811874s to StartCluster
	I1102 13:36:33.191035  314692 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:33.191123  314692 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:36:33.193589  314692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:33.193859  314692 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 13:36:33.193883  314692 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:36:33.193948  314692 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:36:33.194029  314692 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-538419"
	I1102 13:36:33.194070  314692 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-538419"
	I1102 13:36:33.194102  314692 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:36:33.194166  314692 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:33.194236  314692 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-538419"
	I1102 13:36:33.194262  314692 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-538419"
	I1102 13:36:33.194629  314692 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:36:33.196042  314692 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:36:33.197935  314692 out.go:179] * Verifying Kubernetes components...
	I1102 13:36:33.199147  314692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:33.222337  314692 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-538419"
	I1102 13:36:33.222396  314692 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:36:33.222341  314692 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:36:33.222919  314692 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:36:33.223918  314692 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:36:33.223936  314692 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:36:33.223985  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:33.246156  314692 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:36:33.246186  314692 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:36:33.246244  314692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:36:33.249307  314692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:36:33.271138  314692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:36:33.289003  314692 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
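	The sed pipeline above rewrites the coredns ConfigMap before `kubectl replace`-ing it: it inserts a `log` directive ahead of `errors` and a hosts block ahead of the resolv.conf forwarder. Reconstructed from the two sed expressions (a reconstruction, not captured output), the injected Corefile fragment is:
	
	  hosts {
	     192.168.85.1 host.minikube.internal
	     fallthrough
	  }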
	I1102 13:36:33.346354  314692 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:36:33.370537  314692 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:36:33.390381  314692 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:36:33.469684  314692 start.go:1013] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1102 13:36:33.471497  314692 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:36:33.689421  314692 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 13:36:31.584855  309290 node_ready.go:49] node "embed-certs-748183" is "Ready"
	I1102 13:36:31.584886  309290 node_ready.go:38] duration metric: took 11.503552941s for node "embed-certs-748183" to be "Ready" ...
	I1102 13:36:31.584900  309290 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:36:31.584948  309290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:36:31.596698  309290 api_server.go:72] duration metric: took 11.793181765s to wait for apiserver process to appear ...
	I1102 13:36:31.596726  309290 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:36:31.596747  309290 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1102 13:36:31.600845  309290 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1102 13:36:31.601726  309290 api_server.go:141] control plane version: v1.34.1
	I1102 13:36:31.601746  309290 api_server.go:131] duration metric: took 5.014259ms to wait for apiserver health ...
	I1102 13:36:31.601753  309290 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:36:31.606291  309290 system_pods.go:59] 8 kube-system pods found
	I1102 13:36:31.606321  309290 system_pods.go:61] "coredns-66bc5c9577-vpq66" [7cee8886-e5d7-42dd-a915-93e05be996a9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:31.606327  309290 system_pods.go:61] "etcd-embed-certs-748183" [913d1a23-a3e5-4ffc-81c0-f67ccdc4fcc0] Running
	I1102 13:36:31.606335  309290 system_pods.go:61] "kindnet-9zwww" [5d29cb5a-067d-48b9-b7d0-aa53c7388404] Running
	I1102 13:36:31.606340  309290 system_pods.go:61] "kube-apiserver-embed-certs-748183" [ac44b5bb-4e44-458e-aa9c-c3da26210878] Running
	I1102 13:36:31.606345  309290 system_pods.go:61] "kube-controller-manager-embed-certs-748183" [6cf6806a-4165-48c0-bbe2-6ac7af3cd7e6] Running
	I1102 13:36:31.606352  309290 system_pods.go:61] "kube-proxy-pg8nt" [77fcda6d-78b8-4676-8c99-dfc0395b397e] Running
	I1102 13:36:31.606357  309290 system_pods.go:61] "kube-scheduler-embed-certs-748183" [06623fe9-3708-4ace-83f1-55ea8c01ee0e] Running
	I1102 13:36:31.606363  309290 system_pods.go:61] "storage-provisioner" [c7a07ab4-2946-460b-92f5-8b648ed13a68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:36:31.606374  309290 system_pods.go:74] duration metric: took 4.61559ms to wait for pod list to return data ...
	I1102 13:36:31.606385  309290 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:36:31.608738  309290 default_sa.go:45] found service account: "default"
	I1102 13:36:31.608758  309290 default_sa.go:55] duration metric: took 2.368845ms for default service account to be created ...
	I1102 13:36:31.608766  309290 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:36:31.611162  309290 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:31.611191  309290 system_pods.go:89] "coredns-66bc5c9577-vpq66" [7cee8886-e5d7-42dd-a915-93e05be996a9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:31.611197  309290 system_pods.go:89] "etcd-embed-certs-748183" [913d1a23-a3e5-4ffc-81c0-f67ccdc4fcc0] Running
	I1102 13:36:31.611202  309290 system_pods.go:89] "kindnet-9zwww" [5d29cb5a-067d-48b9-b7d0-aa53c7388404] Running
	I1102 13:36:31.611206  309290 system_pods.go:89] "kube-apiserver-embed-certs-748183" [ac44b5bb-4e44-458e-aa9c-c3da26210878] Running
	I1102 13:36:31.611213  309290 system_pods.go:89] "kube-controller-manager-embed-certs-748183" [6cf6806a-4165-48c0-bbe2-6ac7af3cd7e6] Running
	I1102 13:36:31.611217  309290 system_pods.go:89] "kube-proxy-pg8nt" [77fcda6d-78b8-4676-8c99-dfc0395b397e] Running
	I1102 13:36:31.611220  309290 system_pods.go:89] "kube-scheduler-embed-certs-748183" [06623fe9-3708-4ace-83f1-55ea8c01ee0e] Running
	I1102 13:36:31.611226  309290 system_pods.go:89] "storage-provisioner" [c7a07ab4-2946-460b-92f5-8b648ed13a68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:36:31.611247  309290 retry.go:31] will retry after 310.465432ms: missing components: kube-dns
	I1102 13:36:31.926495  309290 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:31.926534  309290 system_pods.go:89] "coredns-66bc5c9577-vpq66" [7cee8886-e5d7-42dd-a915-93e05be996a9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:31.926543  309290 system_pods.go:89] "etcd-embed-certs-748183" [913d1a23-a3e5-4ffc-81c0-f67ccdc4fcc0] Running
	I1102 13:36:31.926551  309290 system_pods.go:89] "kindnet-9zwww" [5d29cb5a-067d-48b9-b7d0-aa53c7388404] Running
	I1102 13:36:31.926556  309290 system_pods.go:89] "kube-apiserver-embed-certs-748183" [ac44b5bb-4e44-458e-aa9c-c3da26210878] Running
	I1102 13:36:31.926576  309290 system_pods.go:89] "kube-controller-manager-embed-certs-748183" [6cf6806a-4165-48c0-bbe2-6ac7af3cd7e6] Running
	I1102 13:36:31.926581  309290 system_pods.go:89] "kube-proxy-pg8nt" [77fcda6d-78b8-4676-8c99-dfc0395b397e] Running
	I1102 13:36:31.926586  309290 system_pods.go:89] "kube-scheduler-embed-certs-748183" [06623fe9-3708-4ace-83f1-55ea8c01ee0e] Running
	I1102 13:36:31.926593  309290 system_pods.go:89] "storage-provisioner" [c7a07ab4-2946-460b-92f5-8b648ed13a68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:36:31.926613  309290 retry.go:31] will retry after 276.571366ms: missing components: kube-dns
	I1102 13:36:32.207483  309290 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:32.207520  309290 system_pods.go:89] "coredns-66bc5c9577-vpq66" [7cee8886-e5d7-42dd-a915-93e05be996a9] Running
	I1102 13:36:32.207528  309290 system_pods.go:89] "etcd-embed-certs-748183" [913d1a23-a3e5-4ffc-81c0-f67ccdc4fcc0] Running
	I1102 13:36:32.207533  309290 system_pods.go:89] "kindnet-9zwww" [5d29cb5a-067d-48b9-b7d0-aa53c7388404] Running
	I1102 13:36:32.207538  309290 system_pods.go:89] "kube-apiserver-embed-certs-748183" [ac44b5bb-4e44-458e-aa9c-c3da26210878] Running
	I1102 13:36:32.207544  309290 system_pods.go:89] "kube-controller-manager-embed-certs-748183" [6cf6806a-4165-48c0-bbe2-6ac7af3cd7e6] Running
	I1102 13:36:32.207549  309290 system_pods.go:89] "kube-proxy-pg8nt" [77fcda6d-78b8-4676-8c99-dfc0395b397e] Running
	I1102 13:36:32.207554  309290 system_pods.go:89] "kube-scheduler-embed-certs-748183" [06623fe9-3708-4ace-83f1-55ea8c01ee0e] Running
	I1102 13:36:32.207558  309290 system_pods.go:89] "storage-provisioner" [c7a07ab4-2946-460b-92f5-8b648ed13a68] Running
	I1102 13:36:32.207580  309290 system_pods.go:126] duration metric: took 598.807681ms to wait for k8s-apps to be running ...
	I1102 13:36:32.207590  309290 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:36:32.207638  309290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:36:32.221556  309290 system_svc.go:56] duration metric: took 13.958351ms WaitForService to wait for kubelet
	I1102 13:36:32.221599  309290 kubeadm.go:587] duration metric: took 12.418085582s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:36:32.221621  309290 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:36:32.224351  309290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:36:32.224390  309290 node_conditions.go:123] node cpu capacity is 8
	I1102 13:36:32.224405  309290 node_conditions.go:105] duration metric: took 2.778535ms to run NodePressure ...
	I1102 13:36:32.224420  309290 start.go:242] waiting for startup goroutines ...
	I1102 13:36:32.224435  309290 start.go:247] waiting for cluster config update ...
	I1102 13:36:32.224450  309290 start.go:256] writing updated cluster config ...
	I1102 13:36:32.224794  309290 ssh_runner.go:195] Run: rm -f paused
	I1102 13:36:32.228705  309290 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:36:32.232244  309290 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vpq66" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:32.236431  309290 pod_ready.go:94] pod "coredns-66bc5c9577-vpq66" is "Ready"
	I1102 13:36:32.236456  309290 pod_ready.go:86] duration metric: took 4.188732ms for pod "coredns-66bc5c9577-vpq66" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:32.238190  309290 pod_ready.go:83] waiting for pod "etcd-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:32.241621  309290 pod_ready.go:94] pod "etcd-embed-certs-748183" is "Ready"
	I1102 13:36:32.241641  309290 pod_ready.go:86] duration metric: took 3.429138ms for pod "etcd-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:32.243246  309290 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:32.246615  309290 pod_ready.go:94] pod "kube-apiserver-embed-certs-748183" is "Ready"
	I1102 13:36:32.246632  309290 pod_ready.go:86] duration metric: took 3.365951ms for pod "kube-apiserver-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:32.248218  309290 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:32.633250  309290 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748183" is "Ready"
	I1102 13:36:32.633275  309290 pod_ready.go:86] duration metric: took 385.039131ms for pod "kube-controller-manager-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:32.834472  309290 pod_ready.go:83] waiting for pod "kube-proxy-pg8nt" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:33.234224  309290 pod_ready.go:94] pod "kube-proxy-pg8nt" is "Ready"
	I1102 13:36:33.234259  309290 pod_ready.go:86] duration metric: took 399.647469ms for pod "kube-proxy-pg8nt" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:33.435115  309290 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:33.832996  309290 pod_ready.go:94] pod "kube-scheduler-embed-certs-748183" is "Ready"
	I1102 13:36:33.833025  309290 pod_ready.go:86] duration metric: took 397.875813ms for pod "kube-scheduler-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:33.833039  309290 pod_ready.go:40] duration metric: took 1.604291847s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:36:33.877416  309290 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:36:33.880534  309290 out.go:179] * Done! kubectl is now configured to use "embed-certs-748183" cluster and "default" namespace by default
	I1102 13:36:33.690548  314692 addons.go:515] duration metric: took 496.597795ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 13:36:33.975041  314692 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-538419" context rescaled to 1 replicas
	
	
	==> CRI-O <==
	Nov 02 13:36:07 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:07.71890446Z" level=info msg="Started container" PID=1759 containerID=289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper id=32bc803e-5a41-44cf-9e10-2ae76b157920 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e237d65aaf68e5b386b2de2ae8649085b5fa1a125c11d3702e140233b6557477
	Nov 02 13:36:08 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:08.678592015Z" level=info msg="Removing container: c32581bf37bcf9c3b4770e76cb9eb0f6e4f60f2acfd97dee446194f8967d688b" id=de1bf873-dad7-42e2-801b-741fbcd56d7a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:36:08 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:08.776127989Z" level=info msg="Removed container c32581bf37bcf9c3b4770e76cb9eb0f6e4f60f2acfd97dee446194f8967d688b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper" id=de1bf873-dad7-42e2-801b-741fbcd56d7a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.701273481Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=18037022-2418-45f2-88c2-b2c861003e6a name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.702123031Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e665b747-82d5-4e59-9aa7-a730294bf676 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.703118953Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=93b948ba-071d-450c-929f-774aadaae2a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.703243533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.707706554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.707948363Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3aced1212329af6f3832ec7611af0c1f43ce2c48f761d9d0a5bc83e31014d7bf/merged/etc/passwd: no such file or directory"
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.70798145Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3aced1212329af6f3832ec7611af0c1f43ce2c48f761d9d0a5bc83e31014d7bf/merged/etc/group: no such file or directory"
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.708257609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.741369174Z" level=info msg="Created container d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92: kube-system/storage-provisioner/storage-provisioner" id=93b948ba-071d-450c-929f-774aadaae2a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.741994427Z" level=info msg="Starting container: d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92" id=3c3257e4-b975-453e-8b5f-e1afc1fa67e1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.744179723Z" level=info msg="Started container" PID=1773 containerID=d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92 description=kube-system/storage-provisioner/storage-provisioner id=3c3257e4-b975-453e-8b5f-e1afc1fa67e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e2dd81ed18b48fe929deae8946a052129bd8ff59139fb2534da90954ab8ab3a
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.585488492Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a8eac34-b1e2-49a0-aaa4-095db307fe96 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.58642057Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c1b173d4-cd92-4c3c-94c3-7d04da7fac8b name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.587441021Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper" id=3e32dc8d-0c14-4c1c-9da2-385b2c9050d6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.587587951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.593982332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.594625922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.620365074Z" level=info msg="Created container 92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper" id=3e32dc8d-0c14-4c1c-9da2-385b2c9050d6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.62097265Z" level=info msg="Starting container: 92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3" id=74d45c23-d247-41d5-b164-17c5b54b4ad6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.622691064Z" level=info msg="Started container" PID=1787 containerID=92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper id=74d45c23-d247-41d5-b164-17c5b54b4ad6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e237d65aaf68e5b386b2de2ae8649085b5fa1a125c11d3702e140233b6557477
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.712778349Z" level=info msg="Removing container: 289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e" id=93e984de-c828-4eb8-a125-7b57525e0172 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.722597507Z" level=info msg="Removed container 289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper" id=93e984de-c828-4eb8-a125-7b57525e0172 name=/runtime.v1.RuntimeService/RemoveContainer
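	Each CRI-O entry above is the server side of a RuntimeService/ImageService RPC. The same state can be queried by hand with crictl, the standard CRI CLI (commands are illustrative, not part of the captured run; the container ID comes from the status table below):
	
	  sudo crictl ps -a              # includes the Exited dashboard-metrics-scraper
	  sudo crictl images             # shows registry.k8s.io/echoserver:1.4 checked above
	  sudo crictl logs 92a13e0f84643 # logs for the restarted scraper container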
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	92a13e0f84643       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   e237d65aaf68e       dashboard-metrics-scraper-5f989dc9cf-7rbwz       kubernetes-dashboard
	d3d8c0b913276       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   3e2dd81ed18b4       storage-provisioner                              kube-system
	63d903272d477       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   7de88829d3dfe       kubernetes-dashboard-8694d4445c-4njq9            kubernetes-dashboard
	0f5b7a2354445       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   8e8ca21ae77e4       coredns-5dd5756b68-th5sb                         kube-system
	fbcb0e84c0b1e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   8e0cf11f10672       busybox                                          default
	8a9186d756f77       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   a6e0fdd4aeb07       kube-proxy-l2sh4                                 kube-system
	d9a922735c457       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   3e2dd81ed18b4       storage-provisioner                              kube-system
	7e383c881d9d3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   ca1a6529d8291       kindnet-cmgvz                                    kube-system
	c7536c075630a       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           55 seconds ago      Running             kube-apiserver              0                   1c70923cd8a06       kube-apiserver-old-k8s-version-054159            kube-system
	497de31f5a58a       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           55 seconds ago      Running             kube-controller-manager     0                   32b79629979f2       kube-controller-manager-old-k8s-version-054159   kube-system
	f4e2888d6cf26       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           55 seconds ago      Running             kube-scheduler              0                   a5b796afd764f       kube-scheduler-old-k8s-version-054159            kube-system
	2b6bce8320e43       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           55 seconds ago      Running             etcd                        0                   a4d6f01c14502       etcd-old-k8s-version-054159                      kube-system
	
	
	==> coredns [0f5b7a2354445d7b753713e24f3d555c858e56162a36128edb30aa4306db7bdf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42509 - 64385 "HINFO IN 6759848681941079425.8368195505754998186. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019674026s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-054159
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-054159
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=old-k8s-version-054159
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_34_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:34:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-054159
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:36:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:36:18 +0000   Sun, 02 Nov 2025 13:34:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:36:18 +0000   Sun, 02 Nov 2025 13:34:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:36:18 +0000   Sun, 02 Nov 2025 13:34:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:36:18 +0000   Sun, 02 Nov 2025 13:35:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-054159
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                4d09a63f-c542-4c8f-a08b-d437451b349c
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-th5sb                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-054159                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-cmgvz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-054159             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-054159    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-l2sh4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-054159             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7rbwz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4njq9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s               kubelet          Node old-k8s-version-054159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s               kubelet          Node old-k8s-version-054159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s               kubelet          Node old-k8s-version-054159 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node old-k8s-version-054159 event: Registered Node old-k8s-version-054159 in Controller
	  Normal  NodeReady                96s                kubelet          Node old-k8s-version-054159 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node old-k8s-version-054159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node old-k8s-version-054159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node old-k8s-version-054159 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node old-k8s-version-054159 event: Registered Node old-k8s-version-054159 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [2b6bce8320e430cccc1ee82606e722a96967aeb952b023a240569ca340578386] <==
	{"level":"info","ts":"2025-11-02T13:35:45.179422Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-02T13:35:45.179697Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-02T13:35:45.179782Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-02T13:35:45.179868Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T13:35:45.179902Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T13:35:46.563938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-02T13:35:46.564068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-02T13:35:46.564106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-02T13:35:46.564137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-02T13:35:46.564145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-02T13:35:46.564157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-02T13:35:46.564167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-02T13:35:46.56507Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-054159 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-02T13:35:46.565329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-02T13:35:46.565381Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-02T13:35:46.565084Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T13:35:46.566932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-02T13:35:46.569075Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T13:35:46.574346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-02T13:36:09.839546Z","caller":"traceutil/trace.go:171","msg":"trace[1554839882] linearizableReadLoop","detail":"{readStateIndex:616; appliedIndex:614; }","duration":"127.400594ms","start":"2025-11-02T13:36:09.712125Z","end":"2025-11-02T13:36:09.839526Z","steps":["trace[1554839882] 'read index received'  (duration: 30.721317ms)","trace[1554839882] 'applied index is now lower than readState.Index'  (duration: 96.678406ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-02T13:36:09.839642Z","caller":"traceutil/trace.go:171","msg":"trace[1612820265] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"153.348876ms","start":"2025-11-02T13:36:09.686269Z","end":"2025-11-02T13:36:09.839617Z","steps":["trace[1612820265] 'process raft request'  (duration: 146.275003ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-02T13:36:09.839721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.600199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-02T13:36:09.839777Z","caller":"traceutil/trace.go:171","msg":"trace[574030839] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:587; }","duration":"127.679631ms","start":"2025-11-02T13:36:09.712088Z","end":"2025-11-02T13:36:09.839768Z","steps":["trace[574030839] 'agreement among raft nodes before linearized reading'  (duration: 127.564378ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-02T13:36:09.839782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.236424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-th5sb\" ","response":"range_response_count:1 size:4991"}
	{"level":"info","ts":"2025-11-02T13:36:09.839824Z","caller":"traceutil/trace.go:171","msg":"trace[1634754425] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-th5sb; range_end:; response_count:1; response_revision:587; }","duration":"112.287115ms","start":"2025-11-02T13:36:09.727525Z","end":"2025-11-02T13:36:09.839812Z","steps":["trace[1634754425] 'agreement among raft nodes before linearized reading'  (duration: 112.195553ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:36:41 up  1:19,  0 user,  load average: 4.19, 4.06, 2.61
	Linux old-k8s-version-054159 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e383c881d9d3abcb4f7e729e96b5ade0e93440c00e4cb2874f12cec251c4038] <==
	I1102 13:35:49.205014       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:35:49.205417       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 13:35:49.234973       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:35:49.235072       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:35:49.235102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:35:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:35:49.445439       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:35:49.445466       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:35:49.445476       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:35:49.449625       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:35:49.802819       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:35:49.802859       1 metrics.go:72] Registering metrics
	I1102 13:35:49.802931       1 controller.go:711] "Syncing nftables rules"
	I1102 13:35:59.445130       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:35:59.445189       1 main.go:301] handling current node
	I1102 13:36:09.445764       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:36:09.445801       1 main.go:301] handling current node
	I1102 13:36:19.445857       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:36:19.445890       1 main.go:301] handling current node
	I1102 13:36:29.449676       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:36:29.449720       1 main.go:301] handling current node
	I1102 13:36:39.448631       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:36:39.448668       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c7536c075630aaedcf682df764af6b60cd0fbd104f3182a0af9fb437ad59e8d1] <==
	I1102 13:35:48.146733       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I1102 13:35:48.222505       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:35:48.249892       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 13:35:48.253795       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1102 13:35:48.253875       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1102 13:35:48.254205       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1102 13:35:48.255323       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1102 13:35:48.255365       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1102 13:35:48.255392       1 shared_informer.go:318] Caches are synced for configmaps
	I1102 13:35:48.255862       1 aggregator.go:166] initial CRD sync complete...
	I1102 13:35:48.255874       1 autoregister_controller.go:141] Starting autoregister controller
	I1102 13:35:48.255881       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 13:35:48.255889       1 cache.go:39] Caches are synced for autoregister controller
	I1102 13:35:48.285040       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1102 13:35:49.150023       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:35:49.547735       1 controller.go:624] quota admission added evaluator for: namespaces
	I1102 13:35:49.586592       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1102 13:35:49.606861       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:35:49.614615       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:35:49.623382       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1102 13:35:49.663702       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.107.146"}
	I1102 13:35:49.675662       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.76.88"}
	I1102 13:36:00.531431       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1102 13:36:00.533682       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:36:00.640773       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [497de31f5a58afa435675d555e4a9181b9b73ba965821b91058ff9ca667f02b0] <==
	I1102 13:36:00.603839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.752506ms"
	I1102 13:36:00.603898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.267814ms"
	I1102 13:36:00.616402       1 shared_informer.go:318] Caches are synced for endpoint
	I1102 13:36:00.621186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="17.163978ms"
	I1102 13:36:00.623062       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.759µs"
	I1102 13:36:00.629756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.787397ms"
	I1102 13:36:00.639929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.01µs"
	I1102 13:36:00.653506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="23.701139ms"
	I1102 13:36:00.653633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.647µs"
	I1102 13:36:00.664842       1 shared_informer.go:318] Caches are synced for persistent volume
	I1102 13:36:00.671116       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1102 13:36:00.750773       1 shared_informer.go:318] Caches are synced for resource quota
	I1102 13:36:00.759658       1 shared_informer.go:318] Caches are synced for resource quota
	I1102 13:36:01.097378       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 13:36:01.159891       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 13:36:01.159998       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1102 13:36:04.694934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.959204ms"
	I1102 13:36:04.695287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.427µs"
	I1102 13:36:07.681177       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.555µs"
	I1102 13:36:08.777632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.897µs"
	I1102 13:36:09.841747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.532µs"
	I1102 13:36:22.723269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.132µs"
	I1102 13:36:24.153803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.407701ms"
	I1102 13:36:24.154098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.362µs"
	I1102 13:36:30.902097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.713µs"
	
	
	==> kube-proxy [8a9186d756f777915b28f8dc5a47f88ab77f5894174f2be3aeb62d4d805d195e] <==
	I1102 13:35:49.102657       1 server_others.go:69] "Using iptables proxy"
	I1102 13:35:49.128932       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1102 13:35:49.189857       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:35:49.198280       1 server_others.go:152] "Using iptables Proxier"
	I1102 13:35:49.198330       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1102 13:35:49.198339       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1102 13:35:49.198366       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1102 13:35:49.198691       1 server.go:846] "Version info" version="v1.28.0"
	I1102 13:35:49.198711       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:35:49.199632       1 config.go:188] "Starting service config controller"
	I1102 13:35:49.199852       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1102 13:35:49.199798       1 config.go:97] "Starting endpoint slice config controller"
	I1102 13:35:49.200246       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1102 13:35:49.200134       1 config.go:315] "Starting node config controller"
	I1102 13:35:49.200791       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1102 13:35:49.300665       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1102 13:35:49.300728       1 shared_informer.go:318] Caches are synced for service config
	I1102 13:35:49.301636       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f4e2888d6cf266f47dd2d8001b51e3862a9b00ef8f405bc0b2701e18774fefa9] <==
	I1102 13:35:45.660294       1 serving.go:348] Generated self-signed cert in-memory
	W1102 13:35:48.217793       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 13:35:48.217952       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 13:35:48.218012       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 13:35:48.218050       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 13:35:48.239249       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1102 13:35:48.239343       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:35:48.242871       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1102 13:35:48.242987       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1102 13:35:48.243083       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:35:48.244020       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1102 13:35:48.344824       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 02 13:36:00 old-k8s-version-054159 kubelet[742]: I1102 13:36:00.687204     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm77l\" (UniqueName: \"kubernetes.io/projected/072adefb-0813-4d34-9eab-d29bfbadd004-kube-api-access-sm77l\") pod \"kubernetes-dashboard-8694d4445c-4njq9\" (UID: \"072adefb-0813-4d34-9eab-d29bfbadd004\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4njq9"
	Nov 02 13:36:00 old-k8s-version-054159 kubelet[742]: I1102 13:36:00.687632     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/072adefb-0813-4d34-9eab-d29bfbadd004-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-4njq9\" (UID: \"072adefb-0813-4d34-9eab-d29bfbadd004\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4njq9"
	Nov 02 13:36:00 old-k8s-version-054159 kubelet[742]: I1102 13:36:00.687851     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3ffbdffe-6393-4f5a-8891-f81dd96d8077-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7rbwz\" (UID: \"3ffbdffe-6393-4f5a-8891-f81dd96d8077\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz"
	Nov 02 13:36:00 old-k8s-version-054159 kubelet[742]: I1102 13:36:00.688050     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9gw\" (UniqueName: \"kubernetes.io/projected/3ffbdffe-6393-4f5a-8891-f81dd96d8077-kube-api-access-hm9gw\") pod \"dashboard-metrics-scraper-5f989dc9cf-7rbwz\" (UID: \"3ffbdffe-6393-4f5a-8891-f81dd96d8077\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz"
	Nov 02 13:36:04 old-k8s-version-054159 kubelet[742]: I1102 13:36:04.685814     742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4njq9" podStartSLOduration=1.21630703 podCreationTimestamp="2025-11-02 13:36:00 +0000 UTC" firstStartedPulling="2025-11-02 13:36:00.928071255 +0000 UTC m=+16.447571918" lastFinishedPulling="2025-11-02 13:36:04.397509945 +0000 UTC m=+19.917010607" observedRunningTime="2025-11-02 13:36:04.682249772 +0000 UTC m=+20.201750466" watchObservedRunningTime="2025-11-02 13:36:04.685745719 +0000 UTC m=+20.205246393"
	Nov 02 13:36:07 old-k8s-version-054159 kubelet[742]: I1102 13:36:07.669740     742 scope.go:117] "RemoveContainer" containerID="c32581bf37bcf9c3b4770e76cb9eb0f6e4f60f2acfd97dee446194f8967d688b"
	Nov 02 13:36:08 old-k8s-version-054159 kubelet[742]: I1102 13:36:08.673655     742 scope.go:117] "RemoveContainer" containerID="c32581bf37bcf9c3b4770e76cb9eb0f6e4f60f2acfd97dee446194f8967d688b"
	Nov 02 13:36:08 old-k8s-version-054159 kubelet[742]: I1102 13:36:08.674306     742 scope.go:117] "RemoveContainer" containerID="289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e"
	Nov 02 13:36:08 old-k8s-version-054159 kubelet[742]: E1102 13:36:08.674962     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7rbwz_kubernetes-dashboard(3ffbdffe-6393-4f5a-8891-f81dd96d8077)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz" podUID="3ffbdffe-6393-4f5a-8891-f81dd96d8077"
	Nov 02 13:36:09 old-k8s-version-054159 kubelet[742]: I1102 13:36:09.678422     742 scope.go:117] "RemoveContainer" containerID="289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e"
	Nov 02 13:36:09 old-k8s-version-054159 kubelet[742]: E1102 13:36:09.678850     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7rbwz_kubernetes-dashboard(3ffbdffe-6393-4f5a-8891-f81dd96d8077)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz" podUID="3ffbdffe-6393-4f5a-8891-f81dd96d8077"
	Nov 02 13:36:10 old-k8s-version-054159 kubelet[742]: I1102 13:36:10.891326     742 scope.go:117] "RemoveContainer" containerID="289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e"
	Nov 02 13:36:10 old-k8s-version-054159 kubelet[742]: E1102 13:36:10.892066     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7rbwz_kubernetes-dashboard(3ffbdffe-6393-4f5a-8891-f81dd96d8077)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz" podUID="3ffbdffe-6393-4f5a-8891-f81dd96d8077"
	Nov 02 13:36:19 old-k8s-version-054159 kubelet[742]: I1102 13:36:19.700842     742 scope.go:117] "RemoveContainer" containerID="d9a922735c457be31b0651d808a21f81fa2076a48b587879e7f67d257541ca7e"
	Nov 02 13:36:22 old-k8s-version-054159 kubelet[742]: I1102 13:36:22.584825     742 scope.go:117] "RemoveContainer" containerID="289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e"
	Nov 02 13:36:22 old-k8s-version-054159 kubelet[742]: I1102 13:36:22.711279     742 scope.go:117] "RemoveContainer" containerID="289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e"
	Nov 02 13:36:22 old-k8s-version-054159 kubelet[742]: I1102 13:36:22.711603     742 scope.go:117] "RemoveContainer" containerID="92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3"
	Nov 02 13:36:22 old-k8s-version-054159 kubelet[742]: E1102 13:36:22.711964     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7rbwz_kubernetes-dashboard(3ffbdffe-6393-4f5a-8891-f81dd96d8077)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz" podUID="3ffbdffe-6393-4f5a-8891-f81dd96d8077"
	Nov 02 13:36:30 old-k8s-version-054159 kubelet[742]: I1102 13:36:30.891705     742 scope.go:117] "RemoveContainer" containerID="92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3"
	Nov 02 13:36:30 old-k8s-version-054159 kubelet[742]: E1102 13:36:30.891988     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7rbwz_kubernetes-dashboard(3ffbdffe-6393-4f5a-8891-f81dd96d8077)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz" podUID="3ffbdffe-6393-4f5a-8891-f81dd96d8077"
	Nov 02 13:36:37 old-k8s-version-054159 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:36:37 old-k8s-version-054159 kubelet[742]: I1102 13:36:37.814306     742 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 02 13:36:37 old-k8s-version-054159 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:36:37 old-k8s-version-054159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 02 13:36:37 old-k8s-version-054159 systemd[1]: kubelet.service: Consumed 1.556s CPU time.
	
	
	==> kubernetes-dashboard [63d903272d477fb68ba2d81cb506487138b802b6f4046fe565cd6fdbf5dbdfd8] <==
	2025/11/02 13:36:04 Starting overwatch
	2025/11/02 13:36:04 Using namespace: kubernetes-dashboard
	2025/11/02 13:36:04 Using in-cluster config to connect to apiserver
	2025/11/02 13:36:04 Using secret token for csrf signing
	2025/11/02 13:36:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 13:36:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 13:36:04 Successful initial request to the apiserver, version: v1.28.0
	2025/11/02 13:36:04 Generating JWE encryption key
	2025/11/02 13:36:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 13:36:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 13:36:04 Initializing JWE encryption key from synchronized object
	2025/11/02 13:36:04 Creating in-cluster Sidecar client
	2025/11/02 13:36:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 13:36:04 Serving insecurely on HTTP port: 9090
	2025/11/02 13:36:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92] <==
	I1102 13:36:19.759601       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:36:19.768904       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:36:19.768955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1102 13:36:37.164313       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:36:37.164471       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"299e1231-1111-4143-bceb-5c3455b6c833", APIVersion:"v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-054159_0332c970-af4f-4d5e-b52f-2e0ddf0733b9 became leader
	I1102 13:36:37.164534       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054159_0332c970-af4f-4d5e-b52f-2e0ddf0733b9!
	I1102 13:36:37.264900       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054159_0332c970-af4f-4d5e-b52f-2e0ddf0733b9!
	
	
	==> storage-provisioner [d9a922735c457be31b0651d808a21f81fa2076a48b587879e7f67d257541ca7e] <==
	I1102 13:35:49.047635       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 13:36:19.052556       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
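The tail of the captured log above records the failure evidence directly: the first storage-provisioner container (d9a922735c457...) exited with an i/o timeout dialing the in-cluster apiserver VIP (10.96.0.1:443), and its replacement (d3d8c0b913276...) then acquired the k8s.io-minikube-hostpath lease normally. To re-read that container's log straight from the node, a minimal manual sketch, assuming crictl is available inside the kic node container (it ships in minikube's kicbase image); the container ID prefix is taken from the log above:

	# hypothetical manual repro against the same profile
	docker exec old-k8s-version-054159 crictl logs d9a922735c457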
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-054159 -n old-k8s-version-054159
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-054159 -n old-k8s-version-054159: exit status 2 (331.71806ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
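An exit status of 2 alongside a `Running` host is consistent with the paused state this test drives the cluster into: `minikube status` reports per-component state and encodes non-Running components in its exit code, which the harness itself notes "may be ok". A quick manual check of the individual components, as a sketch against the same profile (`APIServer` and `Kubelet` are documented status fields):

	out/minikube-linux-amd64 status -p old-k8s-version-054159 --format={{.APIServer}}
	out/minikube-linux-amd64 status -p old-k8s-version-054159 --format={{.Kubelet}}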
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-054159 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-054159
helpers_test.go:243: (dbg) docker inspect old-k8s-version-054159:

-- stdout --
	[
	    {
	        "Id": "a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066",
	        "Created": "2025-11-02T13:34:24.271262498Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:35:36.089492673Z",
	            "FinishedAt": "2025-11-02T13:35:34.238643958Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/hostname",
	        "HostsPath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/hosts",
	        "LogPath": "/var/lib/docker/containers/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066/a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066-json.log",
	        "Name": "/old-k8s-version-054159",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-054159:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-054159",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a6f2405feedbf639595ab6b579a3012a32cfcc44ad5f1b658c49cebf32d06066",
	                "LowerDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65d0764cce8a31b0e0ae352074b365973802c793d1cc889a05870aa015e4971a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-054159",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-054159/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-054159",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-054159",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-054159",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8572b7cb5f8f6a79da6e7886298653e60822c58ad51bf24bead535876c2dd7ab",
	            "SandboxKey": "/var/run/docker/netns/8572b7cb5f8f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-054159": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:1f:60:c0:6f:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4ae33975e63c84f4b70da6cb2d4c25dac69220c357b8926c3be9f60de4d8948a",
	                    "EndpointID": "b6db29e7266a2a7697cf14fff2206c7a9a27b9bc7db4e8cb37f1287734eaceda",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-054159",
	                        "a6f2405feedb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
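The inspect output above shows the node container itself is healthy: privileged, running since 13:35:36, with the Kubernetes apiserver port 8443/tcp published on 127.0.0.1:33103. To pull just that port mapping without scanning the full JSON, a sketch using docker's Go-template support:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-054159
	# prints: 33103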
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054159 -n old-k8s-version-054159
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054159 -n old-k8s-version-054159: exit status 2 (367.741517ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-054159 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-054159 logs -n 25: (1.271999071s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-123357 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cri-dockerd --version                                                                                                                              │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p bridge-123357 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo containerd config dump                                                                                                                             │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo crio config                                                                                                                                        │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ delete  │ -p bridge-123357                                                                                                                                                         │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                        │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p kubernetes-upgrade-273161                                                                                                                                             │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p disable-driver-mounts-560932                                                                                                                                          │ disable-driver-mounts-560932 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-978795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p no-preload-978795 --alsologtostderr -v=3                                                                                                                              │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ image   │ old-k8s-version-054159 image list --format=json                                                                                                                          │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-054159 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-978795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-748183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ start   │ -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:36:42
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:36:42.209188  321355 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:36:42.209489  321355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:42.209501  321355 out.go:374] Setting ErrFile to fd 2...
	I1102 13:36:42.209508  321355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:42.209826  321355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:36:42.210374  321355 out.go:368] Setting JSON to false
	I1102 13:36:42.211991  321355 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4754,"bootTime":1762085848,"procs":401,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:36:42.212133  321355 start.go:143] virtualization: kvm guest
	I1102 13:36:42.214272  321355 out.go:179] * [no-preload-978795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:36:42.215525  321355 notify.go:221] Checking for updates...
	I1102 13:36:42.215538  321355 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:36:42.218542  321355 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:36:42.219730  321355 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:36:42.221640  321355 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:36:42.222831  321355 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:36:42.224210  321355 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Nov 02 13:36:07 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:07.71890446Z" level=info msg="Started container" PID=1759 containerID=289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper id=32bc803e-5a41-44cf-9e10-2ae76b157920 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e237d65aaf68e5b386b2de2ae8649085b5fa1a125c11d3702e140233b6557477
	Nov 02 13:36:08 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:08.678592015Z" level=info msg="Removing container: c32581bf37bcf9c3b4770e76cb9eb0f6e4f60f2acfd97dee446194f8967d688b" id=de1bf873-dad7-42e2-801b-741fbcd56d7a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:36:08 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:08.776127989Z" level=info msg="Removed container c32581bf37bcf9c3b4770e76cb9eb0f6e4f60f2acfd97dee446194f8967d688b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper" id=de1bf873-dad7-42e2-801b-741fbcd56d7a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.701273481Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=18037022-2418-45f2-88c2-b2c861003e6a name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.702123031Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e665b747-82d5-4e59-9aa7-a730294bf676 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.703118953Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=93b948ba-071d-450c-929f-774aadaae2a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.703243533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.707706554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.707948363Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3aced1212329af6f3832ec7611af0c1f43ce2c48f761d9d0a5bc83e31014d7bf/merged/etc/passwd: no such file or directory"
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.70798145Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3aced1212329af6f3832ec7611af0c1f43ce2c48f761d9d0a5bc83e31014d7bf/merged/etc/group: no such file or directory"
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.708257609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.741369174Z" level=info msg="Created container d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92: kube-system/storage-provisioner/storage-provisioner" id=93b948ba-071d-450c-929f-774aadaae2a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.741994427Z" level=info msg="Starting container: d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92" id=3c3257e4-b975-453e-8b5f-e1afc1fa67e1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:36:19 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:19.744179723Z" level=info msg="Started container" PID=1773 containerID=d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92 description=kube-system/storage-provisioner/storage-provisioner id=3c3257e4-b975-453e-8b5f-e1afc1fa67e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e2dd81ed18b48fe929deae8946a052129bd8ff59139fb2534da90954ab8ab3a
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.585488492Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a8eac34-b1e2-49a0-aaa4-095db307fe96 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.58642057Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c1b173d4-cd92-4c3c-94c3-7d04da7fac8b name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.587441021Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper" id=3e32dc8d-0c14-4c1c-9da2-385b2c9050d6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.587587951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.593982332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.594625922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.620365074Z" level=info msg="Created container 92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper" id=3e32dc8d-0c14-4c1c-9da2-385b2c9050d6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.62097265Z" level=info msg="Starting container: 92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3" id=74d45c23-d247-41d5-b164-17c5b54b4ad6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.622691064Z" level=info msg="Started container" PID=1787 containerID=92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper id=74d45c23-d247-41d5-b164-17c5b54b4ad6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e237d65aaf68e5b386b2de2ae8649085b5fa1a125c11d3702e140233b6557477
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.712778349Z" level=info msg="Removing container: 289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e" id=93e984de-c828-4eb8-a125-7b57525e0172 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:36:22 old-k8s-version-054159 crio[592]: time="2025-11-02T13:36:22.722597507Z" level=info msg="Removed container 289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz/dashboard-metrics-scraper" id=93e984de-c828-4eb8-a125-7b57525e0172 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	92a13e0f84643       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   e237d65aaf68e       dashboard-metrics-scraper-5f989dc9cf-7rbwz       kubernetes-dashboard
	d3d8c0b913276       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   3e2dd81ed18b4       storage-provisioner                              kube-system
	63d903272d477       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   7de88829d3dfe       kubernetes-dashboard-8694d4445c-4njq9            kubernetes-dashboard
	0f5b7a2354445       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   8e8ca21ae77e4       coredns-5dd5756b68-th5sb                         kube-system
	fbcb0e84c0b1e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   8e0cf11f10672       busybox                                          default
	8a9186d756f77       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   a6e0fdd4aeb07       kube-proxy-l2sh4                                 kube-system
	d9a922735c457       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   3e2dd81ed18b4       storage-provisioner                              kube-system
	7e383c881d9d3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   ca1a6529d8291       kindnet-cmgvz                                    kube-system
	c7536c075630a       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   1c70923cd8a06       kube-apiserver-old-k8s-version-054159            kube-system
	497de31f5a58a       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   32b79629979f2       kube-controller-manager-old-k8s-version-054159   kube-system
	f4e2888d6cf26       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   a5b796afd764f       kube-scheduler-old-k8s-version-054159            kube-system
	2b6bce8320e43       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   a4d6f01c14502       etcd-old-k8s-version-054159                      kube-system
	
	
	==> coredns [0f5b7a2354445d7b753713e24f3d555c858e56162a36128edb30aa4306db7bdf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42509 - 64385 "HINFO IN 6759848681941079425.8368195505754998186. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019674026s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-054159
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-054159
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=old-k8s-version-054159
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_34_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:34:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-054159
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:36:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:36:18 +0000   Sun, 02 Nov 2025 13:34:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:36:18 +0000   Sun, 02 Nov 2025 13:34:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:36:18 +0000   Sun, 02 Nov 2025 13:34:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:36:18 +0000   Sun, 02 Nov 2025 13:35:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-054159
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                4d09a63f-c542-4c8f-a08b-d437451b349c
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-th5sb                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-054159                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-cmgvz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-054159             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-054159    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-l2sh4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-054159             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-7rbwz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4njq9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s               kubelet          Node old-k8s-version-054159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s               kubelet          Node old-k8s-version-054159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s               kubelet          Node old-k8s-version-054159 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node old-k8s-version-054159 event: Registered Node old-k8s-version-054159 in Controller
	  Normal  NodeReady                99s                kubelet          Node old-k8s-version-054159 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node old-k8s-version-054159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-054159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node old-k8s-version-054159 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                node-controller  Node old-k8s-version-054159 event: Registered Node old-k8s-version-054159 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [2b6bce8320e430cccc1ee82606e722a96967aeb952b023a240569ca340578386] <==
	{"level":"info","ts":"2025-11-02T13:35:45.179422Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-02T13:35:45.179697Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-02T13:35:45.179782Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-02T13:35:45.179868Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T13:35:45.179902Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T13:35:46.563938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-02T13:35:46.564068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-02T13:35:46.564106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-02T13:35:46.564137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-02T13:35:46.564145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-02T13:35:46.564157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-02T13:35:46.564167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-02T13:35:46.56507Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-054159 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-02T13:35:46.565329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-02T13:35:46.565381Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-02T13:35:46.565084Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T13:35:46.566932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-02T13:35:46.569075Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T13:35:46.574346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-02T13:36:09.839546Z","caller":"traceutil/trace.go:171","msg":"trace[1554839882] linearizableReadLoop","detail":"{readStateIndex:616; appliedIndex:614; }","duration":"127.400594ms","start":"2025-11-02T13:36:09.712125Z","end":"2025-11-02T13:36:09.839526Z","steps":["trace[1554839882] 'read index received'  (duration: 30.721317ms)","trace[1554839882] 'applied index is now lower than readState.Index'  (duration: 96.678406ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-02T13:36:09.839642Z","caller":"traceutil/trace.go:171","msg":"trace[1612820265] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"153.348876ms","start":"2025-11-02T13:36:09.686269Z","end":"2025-11-02T13:36:09.839617Z","steps":["trace[1612820265] 'process raft request'  (duration: 146.275003ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-02T13:36:09.839721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.600199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-02T13:36:09.839777Z","caller":"traceutil/trace.go:171","msg":"trace[574030839] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:587; }","duration":"127.679631ms","start":"2025-11-02T13:36:09.712088Z","end":"2025-11-02T13:36:09.839768Z","steps":["trace[574030839] 'agreement among raft nodes before linearized reading'  (duration: 127.564378ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-02T13:36:09.839782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.236424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-th5sb\" ","response":"range_response_count:1 size:4991"}
	{"level":"info","ts":"2025-11-02T13:36:09.839824Z","caller":"traceutil/trace.go:171","msg":"trace[1634754425] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-th5sb; range_end:; response_count:1; response_revision:587; }","duration":"112.287115ms","start":"2025-11-02T13:36:09.727525Z","end":"2025-11-02T13:36:09.839812Z","steps":["trace[1634754425] 'agreement among raft nodes before linearized reading'  (duration: 112.195553ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:36:43 up  1:19,  0 user,  load average: 4.19, 4.06, 2.61
	Linux old-k8s-version-054159 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e383c881d9d3abcb4f7e729e96b5ade0e93440c00e4cb2874f12cec251c4038] <==
	I1102 13:35:49.205014       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:35:49.205417       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 13:35:49.234973       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:35:49.235072       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:35:49.235102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:35:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:35:49.445439       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:35:49.445466       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:35:49.445476       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:35:49.449625       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:35:49.802819       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:35:49.802859       1 metrics.go:72] Registering metrics
	I1102 13:35:49.802931       1 controller.go:711] "Syncing nftables rules"
	I1102 13:35:59.445130       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:35:59.445189       1 main.go:301] handling current node
	I1102 13:36:09.445764       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:36:09.445801       1 main.go:301] handling current node
	I1102 13:36:19.445857       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:36:19.445890       1 main.go:301] handling current node
	I1102 13:36:29.449676       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:36:29.449720       1 main.go:301] handling current node
	I1102 13:36:39.448631       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 13:36:39.448668       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c7536c075630aaedcf682df764af6b60cd0fbd104f3182a0af9fb437ad59e8d1] <==
	I1102 13:35:48.146733       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I1102 13:35:48.222505       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:35:48.249892       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 13:35:48.253795       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1102 13:35:48.253875       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1102 13:35:48.254205       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1102 13:35:48.255323       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1102 13:35:48.255365       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1102 13:35:48.255392       1 shared_informer.go:318] Caches are synced for configmaps
	I1102 13:35:48.255862       1 aggregator.go:166] initial CRD sync complete...
	I1102 13:35:48.255874       1 autoregister_controller.go:141] Starting autoregister controller
	I1102 13:35:48.255881       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 13:35:48.255889       1 cache.go:39] Caches are synced for autoregister controller
	I1102 13:35:48.285040       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1102 13:35:49.150023       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:35:49.547735       1 controller.go:624] quota admission added evaluator for: namespaces
	I1102 13:35:49.586592       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1102 13:35:49.606861       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:35:49.614615       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:35:49.623382       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1102 13:35:49.663702       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.107.146"}
	I1102 13:35:49.675662       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.76.88"}
	I1102 13:36:00.531431       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1102 13:36:00.533682       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:36:00.640773       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [497de31f5a58afa435675d555e4a9181b9b73ba965821b91058ff9ca667f02b0] <==
	I1102 13:36:00.603839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.752506ms"
	I1102 13:36:00.603898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.267814ms"
	I1102 13:36:00.616402       1 shared_informer.go:318] Caches are synced for endpoint
	I1102 13:36:00.621186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="17.163978ms"
	I1102 13:36:00.623062       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.759µs"
	I1102 13:36:00.629756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.787397ms"
	I1102 13:36:00.639929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.01µs"
	I1102 13:36:00.653506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="23.701139ms"
	I1102 13:36:00.653633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.647µs"
	I1102 13:36:00.664842       1 shared_informer.go:318] Caches are synced for persistent volume
	I1102 13:36:00.671116       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1102 13:36:00.750773       1 shared_informer.go:318] Caches are synced for resource quota
	I1102 13:36:00.759658       1 shared_informer.go:318] Caches are synced for resource quota
	I1102 13:36:01.097378       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 13:36:01.159891       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 13:36:01.159998       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1102 13:36:04.694934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.959204ms"
	I1102 13:36:04.695287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.427µs"
	I1102 13:36:07.681177       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.555µs"
	I1102 13:36:08.777632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.897µs"
	I1102 13:36:09.841747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.532µs"
	I1102 13:36:22.723269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.132µs"
	I1102 13:36:24.153803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.407701ms"
	I1102 13:36:24.154098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.362µs"
	I1102 13:36:30.902097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.713µs"
	
	
	==> kube-proxy [8a9186d756f777915b28f8dc5a47f88ab77f5894174f2be3aeb62d4d805d195e] <==
	I1102 13:35:49.102657       1 server_others.go:69] "Using iptables proxy"
	I1102 13:35:49.128932       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1102 13:35:49.189857       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:35:49.198280       1 server_others.go:152] "Using iptables Proxier"
	I1102 13:35:49.198330       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1102 13:35:49.198339       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1102 13:35:49.198366       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1102 13:35:49.198691       1 server.go:846] "Version info" version="v1.28.0"
	I1102 13:35:49.198711       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:35:49.199632       1 config.go:188] "Starting service config controller"
	I1102 13:35:49.199852       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1102 13:35:49.199798       1 config.go:97] "Starting endpoint slice config controller"
	I1102 13:35:49.200246       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1102 13:35:49.200134       1 config.go:315] "Starting node config controller"
	I1102 13:35:49.200791       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1102 13:35:49.300665       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1102 13:35:49.300728       1 shared_informer.go:318] Caches are synced for service config
	I1102 13:35:49.301636       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f4e2888d6cf266f47dd2d8001b51e3862a9b00ef8f405bc0b2701e18774fefa9] <==
	I1102 13:35:45.660294       1 serving.go:348] Generated self-signed cert in-memory
	W1102 13:35:48.217793       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 13:35:48.217952       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 13:35:48.218012       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 13:35:48.218050       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 13:35:48.239249       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1102 13:35:48.239343       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:35:48.242871       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1102 13:35:48.242987       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1102 13:35:48.243083       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:35:48.244020       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1102 13:35:48.344824       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 02 13:36:00 old-k8s-version-054159 kubelet[742]: I1102 13:36:00.687204     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm77l\" (UniqueName: \"kubernetes.io/projected/072adefb-0813-4d34-9eab-d29bfbadd004-kube-api-access-sm77l\") pod \"kubernetes-dashboard-8694d4445c-4njq9\" (UID: \"072adefb-0813-4d34-9eab-d29bfbadd004\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4njq9"
	Nov 02 13:36:00 old-k8s-version-054159 kubelet[742]: I1102 13:36:00.687632     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/072adefb-0813-4d34-9eab-d29bfbadd004-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-4njq9\" (UID: \"072adefb-0813-4d34-9eab-d29bfbadd004\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4njq9"
	Nov 02 13:36:00 old-k8s-version-054159 kubelet[742]: I1102 13:36:00.687851     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3ffbdffe-6393-4f5a-8891-f81dd96d8077-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-7rbwz\" (UID: \"3ffbdffe-6393-4f5a-8891-f81dd96d8077\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz"
	Nov 02 13:36:00 old-k8s-version-054159 kubelet[742]: I1102 13:36:00.688050     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9gw\" (UniqueName: \"kubernetes.io/projected/3ffbdffe-6393-4f5a-8891-f81dd96d8077-kube-api-access-hm9gw\") pod \"dashboard-metrics-scraper-5f989dc9cf-7rbwz\" (UID: \"3ffbdffe-6393-4f5a-8891-f81dd96d8077\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz"
	Nov 02 13:36:04 old-k8s-version-054159 kubelet[742]: I1102 13:36:04.685814     742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4njq9" podStartSLOduration=1.21630703 podCreationTimestamp="2025-11-02 13:36:00 +0000 UTC" firstStartedPulling="2025-11-02 13:36:00.928071255 +0000 UTC m=+16.447571918" lastFinishedPulling="2025-11-02 13:36:04.397509945 +0000 UTC m=+19.917010607" observedRunningTime="2025-11-02 13:36:04.682249772 +0000 UTC m=+20.201750466" watchObservedRunningTime="2025-11-02 13:36:04.685745719 +0000 UTC m=+20.205246393"
	Nov 02 13:36:07 old-k8s-version-054159 kubelet[742]: I1102 13:36:07.669740     742 scope.go:117] "RemoveContainer" containerID="c32581bf37bcf9c3b4770e76cb9eb0f6e4f60f2acfd97dee446194f8967d688b"
	Nov 02 13:36:08 old-k8s-version-054159 kubelet[742]: I1102 13:36:08.673655     742 scope.go:117] "RemoveContainer" containerID="c32581bf37bcf9c3b4770e76cb9eb0f6e4f60f2acfd97dee446194f8967d688b"
	Nov 02 13:36:08 old-k8s-version-054159 kubelet[742]: I1102 13:36:08.674306     742 scope.go:117] "RemoveContainer" containerID="289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e"
	Nov 02 13:36:08 old-k8s-version-054159 kubelet[742]: E1102 13:36:08.674962     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7rbwz_kubernetes-dashboard(3ffbdffe-6393-4f5a-8891-f81dd96d8077)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz" podUID="3ffbdffe-6393-4f5a-8891-f81dd96d8077"
	Nov 02 13:36:09 old-k8s-version-054159 kubelet[742]: I1102 13:36:09.678422     742 scope.go:117] "RemoveContainer" containerID="289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e"
	Nov 02 13:36:09 old-k8s-version-054159 kubelet[742]: E1102 13:36:09.678850     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7rbwz_kubernetes-dashboard(3ffbdffe-6393-4f5a-8891-f81dd96d8077)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz" podUID="3ffbdffe-6393-4f5a-8891-f81dd96d8077"
	Nov 02 13:36:10 old-k8s-version-054159 kubelet[742]: I1102 13:36:10.891326     742 scope.go:117] "RemoveContainer" containerID="289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e"
	Nov 02 13:36:10 old-k8s-version-054159 kubelet[742]: E1102 13:36:10.892066     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7rbwz_kubernetes-dashboard(3ffbdffe-6393-4f5a-8891-f81dd96d8077)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz" podUID="3ffbdffe-6393-4f5a-8891-f81dd96d8077"
	Nov 02 13:36:19 old-k8s-version-054159 kubelet[742]: I1102 13:36:19.700842     742 scope.go:117] "RemoveContainer" containerID="d9a922735c457be31b0651d808a21f81fa2076a48b587879e7f67d257541ca7e"
	Nov 02 13:36:22 old-k8s-version-054159 kubelet[742]: I1102 13:36:22.584825     742 scope.go:117] "RemoveContainer" containerID="289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e"
	Nov 02 13:36:22 old-k8s-version-054159 kubelet[742]: I1102 13:36:22.711279     742 scope.go:117] "RemoveContainer" containerID="289b5bfbd78c8694ffbed926f915ab9ec149a974e8e726fca6bddd0ce4c7dc8e"
	Nov 02 13:36:22 old-k8s-version-054159 kubelet[742]: I1102 13:36:22.711603     742 scope.go:117] "RemoveContainer" containerID="92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3"
	Nov 02 13:36:22 old-k8s-version-054159 kubelet[742]: E1102 13:36:22.711964     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7rbwz_kubernetes-dashboard(3ffbdffe-6393-4f5a-8891-f81dd96d8077)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz" podUID="3ffbdffe-6393-4f5a-8891-f81dd96d8077"
	Nov 02 13:36:30 old-k8s-version-054159 kubelet[742]: I1102 13:36:30.891705     742 scope.go:117] "RemoveContainer" containerID="92a13e0f84643360be4963001510816bdc7b5b27c6864ec77bba9300c95138d3"
	Nov 02 13:36:30 old-k8s-version-054159 kubelet[742]: E1102 13:36:30.891988     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-7rbwz_kubernetes-dashboard(3ffbdffe-6393-4f5a-8891-f81dd96d8077)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-7rbwz" podUID="3ffbdffe-6393-4f5a-8891-f81dd96d8077"
	Nov 02 13:36:37 old-k8s-version-054159 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:36:37 old-k8s-version-054159 kubelet[742]: I1102 13:36:37.814306     742 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 02 13:36:37 old-k8s-version-054159 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:36:37 old-k8s-version-054159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 02 13:36:37 old-k8s-version-054159 systemd[1]: kubelet.service: Consumed 1.556s CPU time.
	
	
	==> kubernetes-dashboard [63d903272d477fb68ba2d81cb506487138b802b6f4046fe565cd6fdbf5dbdfd8] <==
	2025/11/02 13:36:04 Using namespace: kubernetes-dashboard
	2025/11/02 13:36:04 Using in-cluster config to connect to apiserver
	2025/11/02 13:36:04 Using secret token for csrf signing
	2025/11/02 13:36:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 13:36:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 13:36:04 Successful initial request to the apiserver, version: v1.28.0
	2025/11/02 13:36:04 Generating JWE encryption key
	2025/11/02 13:36:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 13:36:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 13:36:04 Initializing JWE encryption key from synchronized object
	2025/11/02 13:36:04 Creating in-cluster Sidecar client
	2025/11/02 13:36:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 13:36:04 Serving insecurely on HTTP port: 9090
	2025/11/02 13:36:04 Starting overwatch
	2025/11/02 13:36:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [d3d8c0b91327679fe6df52e922370a8fc3ad0e272b5b5c843aec0802d74ddb92] <==
	I1102 13:36:19.759601       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:36:19.768904       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:36:19.768955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1102 13:36:37.164313       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:36:37.164471       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"299e1231-1111-4143-bceb-5c3455b6c833", APIVersion:"v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-054159_0332c970-af4f-4d5e-b52f-2e0ddf0733b9 became leader
	I1102 13:36:37.164534       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054159_0332c970-af4f-4d5e-b52f-2e0ddf0733b9!
	I1102 13:36:37.264900       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054159_0332c970-af4f-4d5e-b52f-2e0ddf0733b9!
	
	
	==> storage-provisioner [d9a922735c457be31b0651d808a21f81fa2076a48b587879e7f67d257541ca7e] <==
	I1102 13:35:49.047635       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 13:36:19.052556       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
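
Two failure signatures stand out in the dump above: dashboard-metrics-scraper is in CrashLoopBackOff (the kubelet back-off grows from 10s to 20s), and the first storage-provisioner container exited on a 32s i/o timeout against the in-cluster apiserver VIP 10.96.0.1:443 before its replacement acquired the leader lease at 13:36:37. A minimal sketch for digging into each by hand, reusing the context and object names from this run (illustrative follow-up commands, not something the harness itself executes):

	# Pull the crashed scraper's logs from its previous attempt:
	kubectl --context old-k8s-version-054159 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-5f989dc9cf-7rbwz --previous

	# Inspect the leader-election record the provisioner stores on the
	# kube-system/k8s.io-minikube-hostpath Endpoints object:
	kubectl --context old-k8s-version-054159 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath -o yaml
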
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-054159 -n old-k8s-version-054159
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-054159 -n old-k8s-version-054159: exit status 2 (339.97922ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-054159 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.46s)
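
A plausible reading of the status check above, hedged against minikube's own help text: minikube status encodes component health bitwise in its exit code (1 for minikube, 2 for the cluster, 4 for Kubernetes), so exit status 2 while the APIServer field prints "Running" is consistent with the kubelet having been stopped during the pause attempt, which the "Stopped kubelet.service" systemd lines in the dump corroborate. For the full per-component view instead of a single field, something like:

	out/minikube-linux-amd64 status -p old-k8s-version-054159 --output json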

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-748183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-748183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (283.216692ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:36:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-748183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-748183 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-748183 describe deploy/metrics-server -n kube-system: exit status 1 (64.279416ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-748183 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
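The exit-11 failure above happens upstream of the addon itself: per the stderr, minikube's "check paused" step shells out to sudo runc list -f json, and /run/runc does not exist on this cri-o node, so the check dies before any metrics-server deployment is created, which would explain the NotFound from kubectl describe. A sketch of how one might confirm both halves by hand, reusing the profile and context names from this run; the crictl cross-check is an assumption about querying cri-o directly, not a step the harness runs:

	# Re-run the exact command the paused-state check failed on:
	out/minikube-linux-amd64 ssh -p embed-certs-748183 sudo runc list -f json

	# Ask cri-o itself which containers exist, without touching /run/runc:
	out/minikube-linux-amd64 ssh -p embed-certs-748183 sudo crictl ps -a

	# Had enabling succeeded, the expectation at start_stop_delete_test.go:219
	# inspects roughly this field:
	kubectl --context embed-certs-748183 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
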
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-748183
helpers_test.go:243: (dbg) docker inspect embed-certs-748183:

-- stdout --
	[
	    {
	        "Id": "a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6",
	        "Created": "2025-11-02T13:35:52.708051752Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310384,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:35:52.744937525Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/hostname",
	        "HostsPath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/hosts",
	        "LogPath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6-json.log",
	        "Name": "/embed-certs-748183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-748183:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-748183",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6",
	                "LowerDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-748183",
	                "Source": "/var/lib/docker/volumes/embed-certs-748183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-748183",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-748183",
	                "name.minikube.sigs.k8s.io": "embed-certs-748183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2acc48cbf516aec53d0c9049a12e2387178410a9f118cae2ca524129824c9163",
	            "SandboxKey": "/var/run/docker/netns/2acc48cbf516",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-748183": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:5e:3a:4c:79:5c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4e27916e6204d80d9f3ecde4dc1f7e05cab435dec08a0139421fe16b2b896e8b",
	                    "EndpointID": "ac0921b0350a8358619bef8a7cd56ded3f4d94f415e16a620cafadf692a636e1",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-748183",
	                        "a897616b7925"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
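
The inspect dump above is exhaustive, but the post-mortem only needs two fields from it: the host port bound to the guest SSH port and the node IP on the per-profile network. A minimal sketch (not part of this test suite; the container name is copied from the dump above) of pulling just those fields with Go templates via docker container inspect --format:

	// portquery.go - a minimal sketch (hypothetical helper, not from the test
	// suite) that extracts the two fields above with Go templates instead of
	// parsing the full "docker container inspect" JSON dump.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func inspect(container, format string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"--format", format, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const name = "embed-certs-748183" // container name from the dump above

		// Host port bound to the guest SSH port (33105 in the dump above).
		sshPort, err := inspect(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
		if err != nil {
			panic(err)
		}

		// Node IP on the per-profile docker network (192.168.103.2 above).
		ip, err := inspect(name, `{{(index .NetworkSettings.Networks "`+name+`").IPAddress}}`)
		if err != nil {
			panic(err)
		}

		fmt.Printf("ssh: 127.0.0.1:%s  node IP: %s\n", sshPort, ip)
	}

Against the container inspected above this would print ssh: 127.0.0.1:33105 and node IP 192.168.103.2.
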
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-748183 -n embed-certs-748183
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-748183 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-748183 logs -n 25: (1.067296176s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-123357 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cri-dockerd --version                                                                                                                              │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ ssh     │ -p bridge-123357 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo containerd config dump                                                                                                                             │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo crio config                                                                                                                                        │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ delete  │ -p bridge-123357                                                                                                                                                         │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                        │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p kubernetes-upgrade-273161                                                                                                                                             │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p disable-driver-mounts-560932                                                                                                                                          │ disable-driver-mounts-560932 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-978795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p no-preload-978795 --alsologtostderr -v=3                                                                                                                              │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ image   │ old-k8s-version-054159 image list --format=json                                                                                                                          │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-054159 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-978795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-748183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ start   │ -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:36:42
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:36:42.209188  321355 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:36:42.209489  321355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:42.209501  321355 out.go:374] Setting ErrFile to fd 2...
	I1102 13:36:42.209508  321355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:42.209826  321355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:36:42.210374  321355 out.go:368] Setting JSON to false
	I1102 13:36:42.211991  321355 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4754,"bootTime":1762085848,"procs":401,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:36:42.212133  321355 start.go:143] virtualization: kvm guest
	I1102 13:36:42.214272  321355 out.go:179] * [no-preload-978795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:36:42.215525  321355 notify.go:221] Checking for updates...
	I1102 13:36:42.215538  321355 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:36:42.218542  321355 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:36:42.219730  321355 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:36:42.221640  321355 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:36:42.222831  321355 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:36:42.224210  321355 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:36:42.225890  321355 config.go:182] Loaded profile config "no-preload-978795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:42.226432  321355 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:36:42.252641  321355 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:36:42.252792  321355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:36:42.322477  321355 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-02 13:36:42.309894733 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:36:42.322655  321355 docker.go:319] overlay module found
	I1102 13:36:42.325314  321355 out.go:179] * Using the docker driver based on existing profile
	I1102 13:36:42.326314  321355 start.go:309] selected driver: docker
	I1102 13:36:42.326334  321355 start.go:930] validating driver "docker" against &{Name:no-preload-978795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-978795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:36:42.326440  321355 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:36:42.327206  321355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:36:42.391366  321355 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-02 13:36:42.379553622 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:36:42.391806  321355 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:36:42.391869  321355 cni.go:84] Creating CNI manager for ""
	I1102 13:36:42.391944  321355 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:36:42.392011  321355 start.go:353] cluster config:
	{Name:no-preload-978795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-978795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:36:42.394430  321355 out.go:179] * Starting "no-preload-978795" primary control-plane node in "no-preload-978795" cluster
	I1102 13:36:42.395492  321355 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:36:42.396648  321355 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:36:42.397794  321355 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:36:42.397894  321355 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:36:42.397932  321355 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/no-preload-978795/config.json ...
	I1102 13:36:42.398116  321355 cache.go:107] acquiring lock: {Name:mk8e402672f64678c0b515911aa32ed44fc66f68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:42.398155  321355 cache.go:107] acquiring lock: {Name:mk4022fa878257759556d7b9e87908af05c1e797 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:42.398214  321355 cache.go:107] acquiring lock: {Name:mk1bec501e032175fef4b4277c9e238a5e7b6e78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:42.398226  321355 cache.go:107] acquiring lock: {Name:mkc815430ce9c58af194a42e95d81e0d93b7db76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:42.398265  321355 cache.go:115] /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1102 13:36:42.398285  321355 cache.go:107] acquiring lock: {Name:mkbd485562ce0b1d41d736a1020c7b3cd44160e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:42.398289  321355 cache.go:107] acquiring lock: {Name:mk0de785487dbe217edf98a3c57781cc98ede277 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:42.398301  321355 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 153.52µs
	I1102 13:36:42.398320  321355 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1102 13:36:42.398325  321355 cache.go:115] /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1102 13:36:42.398334  321355 cache.go:115] /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1102 13:36:42.398338  321355 cache.go:115] /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1102 13:36:42.398341  321355 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 117.711µs
	I1102 13:36:42.398341  321355 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 244.105µs
	I1102 13:36:42.398350  321355 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1102 13:36:42.398352  321355 cache.go:115] /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1102 13:36:42.398354  321355 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1102 13:36:42.398348  321355 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 61.875µs
	I1102 13:36:42.398370  321355 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1102 13:36:42.398360  321355 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 79.652µs
	I1102 13:36:42.398373  321355 cache.go:115] /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1102 13:36:42.398378  321355 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1102 13:36:42.398382  321355 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 227.907µs
	I1102 13:36:42.398390  321355 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1102 13:36:42.398365  321355 cache.go:107] acquiring lock: {Name:mk787aa6971f409fe8de89fb4136a6f45397d8c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:42.398123  321355 cache.go:107] acquiring lock: {Name:mke6361da88cce13c42d14fccdf9982a1c87ec29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:42.398425  321355 cache.go:115] /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1102 13:36:42.398432  321355 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 69.155µs
	I1102 13:36:42.398440  321355 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1102 13:36:42.398452  321355 cache.go:115] /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1102 13:36:42.398459  321355 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 344.095µs
	I1102 13:36:42.398478  321355 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21808-9416/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1102 13:36:42.398495  321355 cache.go:87] Successfully saved all images to host disk.
	I1102 13:36:42.421881  321355 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:36:42.421899  321355 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:36:42.421920  321355 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:36:42.421955  321355 start.go:360] acquireMachinesLock for no-preload-978795: {Name:mka481969dbed58deef62e224bbf748810b6a483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:42.422010  321355 start.go:364] duration metric: took 33.849µs to acquireMachinesLock for "no-preload-978795"
	I1102 13:36:42.422027  321355 start.go:96] Skipping create...Using existing machine configuration
	I1102 13:36:42.422034  321355 fix.go:54] fixHost starting: 
	I1102 13:36:42.422287  321355 cli_runner.go:164] Run: docker container inspect no-preload-978795 --format={{.State.Status}}
	I1102 13:36:42.441084  321355 fix.go:112] recreateIfNeeded on no-preload-978795: state=Stopped err=<nil>
	W1102 13:36:42.441121  321355 fix.go:138] unexpected machine state, will restart: <nil>
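
The cache.go entries above show the exists-then-skip flow minikube uses for image caching: each save first checks for the image tarball under .minikube/cache/images/amd64 and records a hit within microseconds when it is already on disk, which is why every image reports "exists ... succeeded" almost instantly. A minimal sketch of that pattern (a hypothetical helper, not minikube's actual cache.go):

	// cachecheck.go - a minimal sketch (hypothetical helper) of the
	// exists-then-skip flow visible in the log above: each image save first
	// stats the tarball on disk and returns immediately on a hit.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"time"
	)

	// cachePath mirrors the layout in the log: the ":" before the tag becomes
	// "_" under cache/images/amd64/, e.g. registry.k8s.io/pause_3.10.1.
	func cachePath(root, image string) string {
		return filepath.Join(root, "cache", "images", "amd64",
			strings.ReplaceAll(image, ":", "_"))
	}

	func saveIfMissing(root, image string) error {
		start := time.Now()
		p := cachePath(root, image)
		if _, err := os.Stat(p); err == nil {
			fmt.Printf("cache image %q -> %q took %s (hit)\n", image, p, time.Since(start))
			return nil // already on disk, nothing to do
		}
		// On a miss, the real code pulls the image and writes the tarball here.
		return fmt.Errorf("miss for %s: pull not implemented in this sketch", image)
	}

	func main() {
		_ = saveIfMissing(os.Getenv("MINIKUBE_HOME"), "registry.k8s.io/pause:3.10.1")
	}
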
	
	
	==> CRI-O <==
	Nov 02 13:36:31 embed-certs-748183 crio[805]: time="2025-11-02T13:36:31.700395726Z" level=info msg="Starting container: 558e0ba72d2546c5d8b20a0b2902ea025642aed9aaa9f1a9b1c2eebb0bfc70cd" id=c017eaea-5e19-4aa2-a4f3-6303f8f5346b name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:36:31 embed-certs-748183 crio[805]: time="2025-11-02T13:36:31.702298013Z" level=info msg="Started container" PID=1878 containerID=558e0ba72d2546c5d8b20a0b2902ea025642aed9aaa9f1a9b1c2eebb0bfc70cd description=kube-system/coredns-66bc5c9577-vpq66/coredns id=c017eaea-5e19-4aa2-a4f3-6303f8f5346b name=/runtime.v1.RuntimeService/StartContainer sandboxID=a9f1f0dd0c1d34a3057a8665c419f037bc836627cf600464462fac4ac0c10963
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.323980684Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0c363527-679b-4eab-8ca2-4dc923d1a401 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.324085815Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.328882121Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cc208b5d480c8f6fdecd0165da57609f2c1949c1249d63b65a75c88317b83b1d UID:d262d7c9-f896-4859-90ca-aa663e85851a NetNS:/var/run/netns/799afdb1-b452-453c-bab7-45ec1e2fe225 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e2c330}] Aliases:map[]}"
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.328920295Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.339738776Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cc208b5d480c8f6fdecd0165da57609f2c1949c1249d63b65a75c88317b83b1d UID:d262d7c9-f896-4859-90ca-aa663e85851a NetNS:/var/run/netns/799afdb1-b452-453c-bab7-45ec1e2fe225 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e2c330}] Aliases:map[]}"
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.339858295Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.340672004Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.341415054Z" level=info msg="Ran pod sandbox cc208b5d480c8f6fdecd0165da57609f2c1949c1249d63b65a75c88317b83b1d with infra container: default/busybox/POD" id=0c363527-679b-4eab-8ca2-4dc923d1a401 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.342573384Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=63d68bc3-b642-45f7-9c9a-bb984cd95430 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.342666467Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=63d68bc3-b642-45f7-9c9a-bb984cd95430 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.342699605Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=63d68bc3-b642-45f7-9c9a-bb984cd95430 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.343396774Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=00caf9a5-f07b-4598-b61d-84a11110a80b name=/runtime.v1.ImageService/PullImage
	Nov 02 13:36:34 embed-certs-748183 crio[805]: time="2025-11-02T13:36:34.344978436Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 02 13:36:35 embed-certs-748183 crio[805]: time="2025-11-02T13:36:35.752508977Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=00caf9a5-f07b-4598-b61d-84a11110a80b name=/runtime.v1.ImageService/PullImage
	Nov 02 13:36:35 embed-certs-748183 crio[805]: time="2025-11-02T13:36:35.753299626Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=26ae230f-f0e6-4434-89cd-c847dc7dd2ed name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:35 embed-certs-748183 crio[805]: time="2025-11-02T13:36:35.754555371Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=849f9b1b-e679-4eb0-b7c7-07439d91575d name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:35 embed-certs-748183 crio[805]: time="2025-11-02T13:36:35.757692368Z" level=info msg="Creating container: default/busybox/busybox" id=7d82a03c-49b9-43fe-b6b6-08b7acba35b1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:35 embed-certs-748183 crio[805]: time="2025-11-02T13:36:35.757794935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:35 embed-certs-748183 crio[805]: time="2025-11-02T13:36:35.761533294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:35 embed-certs-748183 crio[805]: time="2025-11-02T13:36:35.761925938Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:35 embed-certs-748183 crio[805]: time="2025-11-02T13:36:35.786416556Z" level=info msg="Created container 248566cef851f25b3d8be9078d92ae455e7ba1b0ca609201b462c61a0279ea24: default/busybox/busybox" id=7d82a03c-49b9-43fe-b6b6-08b7acba35b1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:35 embed-certs-748183 crio[805]: time="2025-11-02T13:36:35.786994612Z" level=info msg="Starting container: 248566cef851f25b3d8be9078d92ae455e7ba1b0ca609201b462c61a0279ea24" id=ddcb4b93-e7e3-4a6d-8902-c3717ee26e2c name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:36:35 embed-certs-748183 crio[805]: time="2025-11-02T13:36:35.788534792Z" level=info msg="Started container" PID=1950 containerID=248566cef851f25b3d8be9078d92ae455e7ba1b0ca609201b462c61a0279ea24 description=default/busybox/busybox id=ddcb4b93-e7e3-4a6d-8902-c3717ee26e2c name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc208b5d480c8f6fdecd0165da57609f2c1949c1249d63b65a75c88317b83b1d
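
The CRI-O entries above are the standard CRI call sequence for bringing up the busybox pod: RunPodSandbox, an ImageStatus miss, PullImage, CreateContainer, StartContainer. A minimal sketch replaying that sequence with the k8s.io/cri-api client, assuming CRI-O is listening on its default unix socket (/var/run/crio/crio.sock) and using a made-up pod UID:

	// crireplay.go - a minimal sketch (not from the test suite) of the CRI
	// call sequence in the log above, issued directly against CRI-O.
	package main

	import (
		"context"
		"fmt"
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		cri "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		ctx := context.Background()
		rt, img := cri.NewRuntimeServiceClient(conn), cri.NewImageServiceClient(conn)

		// RunPodSandbox: the infra ("POD") container in the log above.
		sandboxCfg := &cri.PodSandboxConfig{
			Metadata: &cri.PodSandboxMetadata{Name: "busybox", Namespace: "default", Uid: "demo-uid"},
		}
		sb, err := rt.RunPodSandbox(ctx, &cri.RunPodSandboxRequest{Config: sandboxCfg})
		if err != nil {
			log.Fatal(err)
		}

		// PullImage: what CRI-O logs as "Trying to access ..." above.
		image := &cri.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
		if _, err := img.PullImage(ctx, &cri.PullImageRequest{Image: image}); err != nil {
			log.Fatal(err)
		}

		// CreateContainer + StartContainer inside the sandbox.
		c, err := rt.CreateContainer(ctx, &cri.CreateContainerRequest{
			PodSandboxId:  sb.PodSandboxId,
			Config:        &cri.ContainerConfig{Metadata: &cri.ContainerMetadata{Name: "busybox"}, Image: image},
			SandboxConfig: sandboxCfg,
		})
		if err != nil {
			log.Fatal(err)
		}
		if _, err := rt.StartContainer(ctx, &cri.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
			log.Fatal(err)
		}
		fmt.Println("started", c.ContainerId, "in sandbox", sb.PodSandboxId)
	}
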
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	248566cef851f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   cc208b5d480c8       busybox                                      default
	558e0ba72d254       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   a9f1f0dd0c1d3       coredns-66bc5c9577-vpq66                     kube-system
	d8a66dc3460ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   14eee5af78065       storage-provisioner                          kube-system
	f4fc7fba9ee42       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   ae5238062ae6b       kindnet-9zwww                                kube-system
	af3948769fb1a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   c3f0b92d4045f       kube-proxy-pg8nt                             kube-system
	1ef35701c6e5c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   7c12dcd896f3d       kube-controller-manager-embed-certs-748183   kube-system
	a720474c582e8       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   b1dddbf9e9f03       kube-apiserver-embed-certs-748183            kube-system
	b392b30a2b922       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   576ea9853ab36       kube-scheduler-embed-certs-748183            kube-system
	6f523aad768bb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   926953f1a04c2       etcd-embed-certs-748183                      kube-system
	
	
	==> coredns [558e0ba72d2546c5d8b20a0b2902ea025642aed9aaa9f1a9b1c2eebb0bfc70cd] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39891 - 30334 "HINFO IN 2663296786869753545.8463147603692863844. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016169908s
	
	
	==> describe nodes <==
	Name:               embed-certs-748183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-748183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=embed-certs-748183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_36_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:36:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-748183
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:36:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:36:31 +0000   Sun, 02 Nov 2025 13:36:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:36:31 +0000   Sun, 02 Nov 2025 13:36:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:36:31 +0000   Sun, 02 Nov 2025 13:36:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:36:31 +0000   Sun, 02 Nov 2025 13:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-748183
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b44fc6a8-f48d-4728-a7f6-4178f12db103
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-vpq66                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-748183                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-9zwww                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-748183             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-748183    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-pg8nt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-748183             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-748183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-748183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-748183 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node embed-certs-748183 event: Registered Node embed-certs-748183 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-748183 status is now: NodeReady
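
The Allocated resources block above is the per-pod requests summed and divided by node allocatable, truncated to whole percent: the six CPU requests in the pod table (100m+100m+100m+250m+200m+100m) total 850m, and 850m of the 8000m allocatable (8 CPUs) is 10.6%, printed as 10%. A quick check of that arithmetic:

	// reqsum.go - a quick sketch verifying the "Allocated resources" CPU line
	// above: sum the per-pod requests and divide by node allocatable,
	// truncating to a whole percent.
	package main

	import "fmt"

	func main() {
		// coredns, etcd, kindnet, kube-apiserver, kube-controller-manager, kube-scheduler
		requestsMilli := []int64{100, 100, 100, 250, 200, 100}
		var sum int64
		for _, m := range requestsMilli {
			sum += m
		}
		allocatableMilli := int64(8 * 1000) // 8 CPUs from the Allocatable block
		fmt.Printf("cpu %dm (%d%%)\n", sum, sum*100/allocatableMilli)
		// prints: cpu 850m (10%) - matching the table (10.6% truncated)
	}
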
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [6f523aad768bb1504178747d772f4d5b286c2cd860675f82228213a118ffc130] <==
	{"level":"warn","ts":"2025-11-02T13:36:11.196402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.203294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.209541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.227489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.239259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.247368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.254934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.261466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.268228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.275364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.281740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.295244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.302546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.316222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.322897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.329655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.336446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.342256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.349372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.356383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.375789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.379081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.385377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.392044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:11.441429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55014","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:36:43 up  1:19,  0 user,  load average: 4.19, 4.06, 2.61
	Linux embed-certs-748183 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f4fc7fba9ee42a264fe7fae84891d06c779271a437ba65c4412cdcc9db07e419] <==
	I1102 13:36:20.820115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:36:20.820444       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1102 13:36:20.820634       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:36:20.820654       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:36:20.820688       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:36:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:36:20.928204       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:36:20.928234       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:36:20.928245       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:36:20.928385       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:36:21.220045       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:36:21.220105       1 metrics.go:72] Registering metrics
	I1102 13:36:21.230810       1 controller.go:711] "Syncing nftables rules"
	I1102 13:36:30.931923       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:36:30.931980       1 main.go:301] handling current node
	I1102 13:36:40.931959       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:36:40.932006       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a720474c582e83a82527fe88ce6e48dc82ad6bfb4ad29ad04520eb3ecf8d7d5f] <==
	I1102 13:36:11.894948       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 13:36:11.894969       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:36:11.899618       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:36:11.899741       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:36:11.901873       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 13:36:11.904958       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1102 13:36:12.088497       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:36:12.795711       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 13:36:12.799386       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 13:36:12.799407       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:36:13.259756       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:36:13.295625       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:36:13.399038       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 13:36:13.404612       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1102 13:36:13.405664       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 13:36:13.409258       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:36:13.812553       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:36:14.253442       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:36:14.263183       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 13:36:14.272447       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 13:36:19.165852       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 13:36:19.564826       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1102 13:36:19.666676       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:36:19.670205       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1102 13:36:42.127543       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:36390: use of closed network connection
	
	
	==> kube-controller-manager [1ef35701c6e5c86d818f3983674cb6864e8adce719fe7b7a1dc979a60b730a59] <==
	I1102 13:36:18.811842       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 13:36:18.811860       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:36:18.811874       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:36:18.811882       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:36:18.812074       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1102 13:36:18.812198       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 13:36:18.812553       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 13:36:18.812609       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 13:36:18.812905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 13:36:18.812936       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 13:36:18.813000       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 13:36:18.813025       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 13:36:18.813029       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 13:36:18.813096       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 13:36:18.813110       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 13:36:18.813318       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1102 13:36:18.813778       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 13:36:18.814985       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 13:36:18.818253       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:36:18.818259       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 13:36:18.820559       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:36:18.831879       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 13:36:18.835106       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:36:18.837219       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 13:36:33.763872       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [af3948769fb1ad7bc3586ec39e6d5d6e4a4aff29457a2b6699e19ae37f5cb1b3] <==
	I1102 13:36:20.580266       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:36:20.670260       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:36:20.770675       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:36:20.770714       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1102 13:36:20.770806       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:36:20.789625       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:36:20.789682       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:36:20.794756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:36:20.795102       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:36:20.795134       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:36:20.796497       1 config.go:200] "Starting service config controller"
	I1102 13:36:20.796533       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:36:20.796578       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:36:20.796594       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:36:20.796616       1 config.go:309] "Starting node config controller"
	I1102 13:36:20.796622       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:36:20.796622       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:36:20.796650       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:36:20.896761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:36:20.896790       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:36:20.896804       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 13:36:20.896825       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b392b30a2b92283dcf90cb688cd171f3b096b39faf37584e71cc5ea2a56ff47c] <==
	E1102 13:36:11.841313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 13:36:11.841833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 13:36:11.841878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 13:36:11.841896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 13:36:11.841905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 13:36:11.841927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 13:36:11.841918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 13:36:11.841959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 13:36:11.841997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 13:36:11.842031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 13:36:11.842110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 13:36:11.842123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 13:36:11.842168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 13:36:12.686973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 13:36:12.731397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 13:36:12.782904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 13:36:12.788036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 13:36:12.838652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 13:36:12.881787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 13:36:12.894916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 13:36:12.904970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 13:36:13.053078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 13:36:13.063084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 13:36:13.071233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1102 13:36:13.438140       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: I1102 13:36:19.628865    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d29cb5a-067d-48b9-b7d0-aa53c7388404-xtables-lock\") pod \"kindnet-9zwww\" (UID: \"5d29cb5a-067d-48b9-b7d0-aa53c7388404\") " pod="kube-system/kindnet-9zwww"
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: I1102 13:36:19.628916    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksvcq\" (UniqueName: \"kubernetes.io/projected/5d29cb5a-067d-48b9-b7d0-aa53c7388404-kube-api-access-ksvcq\") pod \"kindnet-9zwww\" (UID: \"5d29cb5a-067d-48b9-b7d0-aa53c7388404\") " pod="kube-system/kindnet-9zwww"
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: I1102 13:36:19.628973    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2tz8\" (UniqueName: \"kubernetes.io/projected/77fcda6d-78b8-4676-8c99-dfc0395b397e-kube-api-access-b2tz8\") pod \"kube-proxy-pg8nt\" (UID: \"77fcda6d-78b8-4676-8c99-dfc0395b397e\") " pod="kube-system/kube-proxy-pg8nt"
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: I1102 13:36:19.629016    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/77fcda6d-78b8-4676-8c99-dfc0395b397e-kube-proxy\") pod \"kube-proxy-pg8nt\" (UID: \"77fcda6d-78b8-4676-8c99-dfc0395b397e\") " pod="kube-system/kube-proxy-pg8nt"
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: I1102 13:36:19.629033    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77fcda6d-78b8-4676-8c99-dfc0395b397e-xtables-lock\") pod \"kube-proxy-pg8nt\" (UID: \"77fcda6d-78b8-4676-8c99-dfc0395b397e\") " pod="kube-system/kube-proxy-pg8nt"
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: I1102 13:36:19.629047    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77fcda6d-78b8-4676-8c99-dfc0395b397e-lib-modules\") pod \"kube-proxy-pg8nt\" (UID: \"77fcda6d-78b8-4676-8c99-dfc0395b397e\") " pod="kube-system/kube-proxy-pg8nt"
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: I1102 13:36:19.629068    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5d29cb5a-067d-48b9-b7d0-aa53c7388404-cni-cfg\") pod \"kindnet-9zwww\" (UID: \"5d29cb5a-067d-48b9-b7d0-aa53c7388404\") " pod="kube-system/kindnet-9zwww"
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: I1102 13:36:19.629080    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d29cb5a-067d-48b9-b7d0-aa53c7388404-lib-modules\") pod \"kindnet-9zwww\" (UID: \"5d29cb5a-067d-48b9-b7d0-aa53c7388404\") " pod="kube-system/kindnet-9zwww"
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: E1102 13:36:19.737291    1344 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: E1102 13:36:19.737333    1344 projected.go:196] Error preparing data for projected volume kube-api-access-ksvcq for pod kube-system/kindnet-9zwww: configmap "kube-root-ca.crt" not found
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: E1102 13:36:19.737421    1344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d29cb5a-067d-48b9-b7d0-aa53c7388404-kube-api-access-ksvcq podName:5d29cb5a-067d-48b9-b7d0-aa53c7388404 nodeName:}" failed. No retries permitted until 2025-11-02 13:36:20.237381617 +0000 UTC m=+6.225371613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ksvcq" (UniqueName: "kubernetes.io/projected/5d29cb5a-067d-48b9-b7d0-aa53c7388404-kube-api-access-ksvcq") pod "kindnet-9zwww" (UID: "5d29cb5a-067d-48b9-b7d0-aa53c7388404") : configmap "kube-root-ca.crt" not found
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: E1102 13:36:19.737294    1344 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: E1102 13:36:19.737446    1344 projected.go:196] Error preparing data for projected volume kube-api-access-b2tz8 for pod kube-system/kube-proxy-pg8nt: configmap "kube-root-ca.crt" not found
	Nov 02 13:36:19 embed-certs-748183 kubelet[1344]: E1102 13:36:19.737469    1344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/77fcda6d-78b8-4676-8c99-dfc0395b397e-kube-api-access-b2tz8 podName:77fcda6d-78b8-4676-8c99-dfc0395b397e nodeName:}" failed. No retries permitted until 2025-11-02 13:36:20.237462002 +0000 UTC m=+6.225451999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b2tz8" (UniqueName: "kubernetes.io/projected/77fcda6d-78b8-4676-8c99-dfc0395b397e-kube-api-access-b2tz8") pod "kube-proxy-pg8nt" (UID: "77fcda6d-78b8-4676-8c99-dfc0395b397e") : configmap "kube-root-ca.crt" not found
	Nov 02 13:36:21 embed-certs-748183 kubelet[1344]: I1102 13:36:21.144243    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9zwww" podStartSLOduration=2.144223974 podStartE2EDuration="2.144223974s" podCreationTimestamp="2025-11-02 13:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:21.144160711 +0000 UTC m=+7.132150728" watchObservedRunningTime="2025-11-02 13:36:21.144223974 +0000 UTC m=+7.132213990"
	Nov 02 13:36:23 embed-certs-748183 kubelet[1344]: I1102 13:36:23.826675    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pg8nt" podStartSLOduration=4.826651597 podStartE2EDuration="4.826651597s" podCreationTimestamp="2025-11-02 13:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:21.153372274 +0000 UTC m=+7.141362291" watchObservedRunningTime="2025-11-02 13:36:23.826651597 +0000 UTC m=+9.814641614"
	Nov 02 13:36:31 embed-certs-748183 kubelet[1344]: I1102 13:36:31.312458    1344 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 02 13:36:31 embed-certs-748183 kubelet[1344]: I1102 13:36:31.417488    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcl5c\" (UniqueName: \"kubernetes.io/projected/c7a07ab4-2946-460b-92f5-8b648ed13a68-kube-api-access-pcl5c\") pod \"storage-provisioner\" (UID: \"c7a07ab4-2946-460b-92f5-8b648ed13a68\") " pod="kube-system/storage-provisioner"
	Nov 02 13:36:31 embed-certs-748183 kubelet[1344]: I1102 13:36:31.417551    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cee8886-e5d7-42dd-a915-93e05be996a9-config-volume\") pod \"coredns-66bc5c9577-vpq66\" (UID: \"7cee8886-e5d7-42dd-a915-93e05be996a9\") " pod="kube-system/coredns-66bc5c9577-vpq66"
	Nov 02 13:36:31 embed-certs-748183 kubelet[1344]: I1102 13:36:31.417641    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c7a07ab4-2946-460b-92f5-8b648ed13a68-tmp\") pod \"storage-provisioner\" (UID: \"c7a07ab4-2946-460b-92f5-8b648ed13a68\") " pod="kube-system/storage-provisioner"
	Nov 02 13:36:31 embed-certs-748183 kubelet[1344]: I1102 13:36:31.417723    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7m4c\" (UniqueName: \"kubernetes.io/projected/7cee8886-e5d7-42dd-a915-93e05be996a9-kube-api-access-s7m4c\") pod \"coredns-66bc5c9577-vpq66\" (UID: \"7cee8886-e5d7-42dd-a915-93e05be996a9\") " pod="kube-system/coredns-66bc5c9577-vpq66"
	Nov 02 13:36:32 embed-certs-748183 kubelet[1344]: I1102 13:36:32.170231    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.170207725 podStartE2EDuration="12.170207725s" podCreationTimestamp="2025-11-02 13:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:32.169982551 +0000 UTC m=+18.157972569" watchObservedRunningTime="2025-11-02 13:36:32.170207725 +0000 UTC m=+18.158197742"
	Nov 02 13:36:32 embed-certs-748183 kubelet[1344]: I1102 13:36:32.182779    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vpq66" podStartSLOduration=12.182750318 podStartE2EDuration="12.182750318s" podCreationTimestamp="2025-11-02 13:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:32.182043502 +0000 UTC m=+18.170033520" watchObservedRunningTime="2025-11-02 13:36:32.182750318 +0000 UTC m=+18.170740335"
	Nov 02 13:36:34 embed-certs-748183 kubelet[1344]: I1102 13:36:34.036542    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xz5b\" (UniqueName: \"kubernetes.io/projected/d262d7c9-f896-4859-90ca-aa663e85851a-kube-api-access-6xz5b\") pod \"busybox\" (UID: \"d262d7c9-f896-4859-90ca-aa663e85851a\") " pod="default/busybox"
	Nov 02 13:36:36 embed-certs-748183 kubelet[1344]: I1102 13:36:36.180490    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.769437585 podStartE2EDuration="2.180467387s" podCreationTimestamp="2025-11-02 13:36:34 +0000 UTC" firstStartedPulling="2025-11-02 13:36:34.343009128 +0000 UTC m=+20.330999124" lastFinishedPulling="2025-11-02 13:36:35.754038917 +0000 UTC m=+21.742028926" observedRunningTime="2025-11-02 13:36:36.18042354 +0000 UTC m=+22.168413557" watchObservedRunningTime="2025-11-02 13:36:36.180467387 +0000 UTC m=+22.168457406"
	
	
	==> storage-provisioner [d8a66dc3460ffa1860fc514a3cefab667318125aa11c34c237689af17142c021] <==
	I1102 13:36:31.708896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:36:31.717974       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:36:31.718012       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 13:36:31.719952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:31.725226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:36:31.725378       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:36:31.725517       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-748183_22daf4b7-d93f-4786-be1b-36108c1e8474!
	I1102 13:36:31.725519       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f5836755-5bb9-4f0c-9c57-d7cfd1b93802", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-748183_22daf4b7-d93f-4786-be1b-36108c1e8474 became leader
	W1102 13:36:31.727216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:31.732120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:36:31.826523       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-748183_22daf4b7-d93f-4786-be1b-36108c1e8474!
	W1102 13:36:33.735308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:33.739266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:35.742133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:35.747140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:37.750325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:37.754456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:39.757697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:39.765757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:41.769497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:41.773959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:43.777282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:43.782978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-748183 -n embed-certs-748183
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-748183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (248.891925ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:36:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
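The stderr above pins down the mechanism: before enabling the addon, minikube first checks whether the runtime is paused by running `sudo runc list -f json` on the node, and that listing aborts because the runc state directory /run/runc does not exist on this crio node. A minimal Go sketch of that style of check — reconstructed from the error output, not taken from minikube's source:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The state directory named in the error output above.
		if _, err := os.Stat("/run/runc"); err != nil {
			fmt.Println("paused check cannot run:", err)
			return
		}
		// The same listing the harness reports: `sudo runc list -f json`.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Println("runc list failed:", err)
			return
		}
		fmt.Println(string(out))
	}

On these nodes the os.Stat branch fires, matching the "open /run/runc: no such file or directory" line in the stderr.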
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-538419 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-538419 describe deploy/metrics-server -n kube-system: exit status 1 (61.548972ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-538419 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
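For reference, the expected string in that assertion is the override registry (--registries=MetricsServer=fake.domain) prefixed to the configured image (--images=MetricsServer=registry.k8s.io/echoserver:1.4). A small illustrative Go helper that reproduces the naming rule as inferred from the assertion text, not copied from minikube's code:

	package main

	import (
		"fmt"
		"strings"
	)

	// expectedImage prefixes the override registry to the image reference.
	// The rule is an assumption inferred from the test's expected string.
	func expectedImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return strings.TrimSuffix(registry, "/") + "/" + image
	}

	func main() {
		fmt.Println(expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
		// prints: fake.domain/registry.k8s.io/echoserver:1.4
	}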
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-538419
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-538419:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2",
	        "Created": "2025-11-02T13:36:10.354191788Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315572,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:36:10.38737043Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/hosts",
	        "LogPath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2-json.log",
	        "Name": "/default-k8s-diff-port-538419",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-538419:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-538419",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2",
	                "LowerDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-538419",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-538419/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-538419",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-538419",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-538419",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "613365c666669bb9c8ef56d84a9b390f3780813e0616d3c9f8a9140ea30e7914",
	            "SandboxKey": "/var/run/docker/netns/613365c66666",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-538419": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:e0:79:36:95:7b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8a5177e2530dcf8dba1a46a1c8708fe51c8cc64912038433c6196e6d34da5a5b",
	                    "EndpointID": "f89cc23e07f98f20dba475f4af62a0228444117ce11e680f9a15f6e8325d12d6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-538419",
	                        "922c5d262078"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
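As an aside, the NetworkSettings.Ports map in the inspect output above is where the host-side API server endpoint can be recovered: 8444/tcp (this profile's --apiserver-port) maps to 127.0.0.1:33113. A hypothetical Go snippet, not part of the harness, that extracts that mapping from `docker inspect`:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields needed from `docker inspect` are modelled here.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-538419").Output()
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		// 8444/tcp is the apiserver port this profile was started with.
		for _, b := range cs[0].NetworkSettings.Ports["8444/tcp"] {
			fmt.Printf("apiserver mapped to %s:%s\n", b.HostIp, b.HostPort)
		}
	}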
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-538419 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-538419 logs -n 25: (1.235438617s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-123357 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo containerd config dump                                                                                                                                                                                                  │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo crio config                                                                                                                                                                                                             │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ delete  │ -p bridge-123357                                                                                                                                                                                                                              │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p kubernetes-upgrade-273161                                                                                                                                                                                                                  │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p disable-driver-mounts-560932                                                                                                                                                                                                               │ disable-driver-mounts-560932 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-978795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p no-preload-978795 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ image   │ old-k8s-version-054159 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-054159 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-978795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-748183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ start   │ -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ stop    │ -p embed-certs-748183 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:36:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:36:46.941815  324005 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:36:46.942113  324005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:46.942124  324005 out.go:374] Setting ErrFile to fd 2...
	I1102 13:36:46.942130  324005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:36:46.942354  324005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:36:46.942881  324005 out.go:368] Setting JSON to false
	I1102 13:36:46.944466  324005 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4759,"bootTime":1762085848,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:36:46.944541  324005 start.go:143] virtualization: kvm guest
	I1102 13:36:46.947123  324005 out.go:179] * [newest-cni-066482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:36:46.948469  324005 notify.go:221] Checking for updates...
	I1102 13:36:46.948478  324005 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:36:46.949753  324005 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:36:46.951062  324005 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:36:46.952226  324005 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:36:46.953534  324005 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:36:46.954736  324005 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:36:46.956331  324005 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:46.956499  324005 config.go:182] Loaded profile config "embed-certs-748183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:46.956665  324005 config.go:182] Loaded profile config "no-preload-978795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:46.956773  324005 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:36:46.984896  324005 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:36:46.985018  324005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:36:47.041875  324005 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-02 13:36:47.031621119 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:36:47.042046  324005 docker.go:319] overlay module found
	I1102 13:36:47.043854  324005 out.go:179] * Using the docker driver based on user configuration
	I1102 13:36:47.044957  324005 start.go:309] selected driver: docker
	I1102 13:36:47.044977  324005 start.go:930] validating driver "docker" against <nil>
	I1102 13:36:47.045001  324005 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:36:47.045708  324005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:36:47.106830  324005 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-02 13:36:47.096669026 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:36:47.106994  324005 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1102 13:36:47.107018  324005 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1102 13:36:47.107266  324005 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:36:47.109498  324005 out.go:179] * Using Docker driver with root privileges
	I1102 13:36:47.110605  324005 cni.go:84] Creating CNI manager for ""
	I1102 13:36:47.110677  324005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:36:47.110688  324005 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 13:36:47.110763  324005 start.go:353] cluster config:
	{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:36:47.111985  324005 out.go:179] * Starting "newest-cni-066482" primary control-plane node in "newest-cni-066482" cluster
	I1102 13:36:47.113378  324005 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:36:47.115972  324005 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:36:47.120014  324005 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:36:47.120049  324005 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:36:47.120056  324005 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:36:47.120135  324005 cache.go:59] Caching tarball of preloaded images
	I1102 13:36:47.120229  324005 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:36:47.120256  324005 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:36:47.120454  324005 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:36:47.120516  324005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json: {Name:mk78e0c09789df2c441058f1d3b758e8192f7bd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:47.142671  324005 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:36:47.142690  324005 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:36:47.142703  324005 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:36:47.142729  324005 start.go:360] acquireMachinesLock for newest-cni-066482: {Name:mk25ceca9700045fc79c727ac5793f50b1f35f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:36:47.142807  324005 start.go:364] duration metric: took 62.381µs to acquireMachinesLock for "newest-cni-066482"
	I1102 13:36:47.142829  324005 start.go:93] Provisioning new machine with config: &{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:36:47.142903  324005 start.go:125] createHost starting for "" (driver="docker")
	I1102 13:36:42.445681  321355 out.go:252] * Restarting existing docker container for "no-preload-978795" ...
	I1102 13:36:42.445776  321355 cli_runner.go:164] Run: docker start no-preload-978795
	I1102 13:36:42.718644  321355 cli_runner.go:164] Run: docker container inspect no-preload-978795 --format={{.State.Status}}
	I1102 13:36:42.741177  321355 kic.go:430] container "no-preload-978795" state is running.
	I1102 13:36:42.741762  321355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978795
	I1102 13:36:42.764673  321355 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/no-preload-978795/config.json ...
	I1102 13:36:42.764918  321355 machine.go:94] provisionDockerMachine start ...
	I1102 13:36:42.764980  321355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:36:42.785677  321355 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:42.785922  321355 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I1102 13:36:42.785938  321355 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:36:42.786585  321355 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57246->127.0.0.1:33115: read: connection reset by peer
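	
	The "connection reset by peer" above is transient: sshd inside the freshly restarted container is not yet accepting connections, and libmachine keeps retrying the dial until it succeeds (as it does at 13:36:45 below). A minimal host-side sketch of the same wait, reusing the port and key path from this log:
	
	    until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 -p 33115 \
	          -i /home/jenkins/minikube-integration/21808-9416/.minikube/machines/no-preload-978795/id_rsa \
	          docker@127.0.0.1 true 2>/dev/null; do
	      sleep 1   # sshd still starting; retry the dial
	    done
	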
	I1102 13:36:45.934614  321355 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-978795
	
	I1102 13:36:45.934642  321355 ubuntu.go:182] provisioning hostname "no-preload-978795"
	I1102 13:36:45.934703  321355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:36:45.956119  321355 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:45.956420  321355 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I1102 13:36:45.956440  321355 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-978795 && echo "no-preload-978795" | sudo tee /etc/hostname
	I1102 13:36:46.111053  321355 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-978795
	
	I1102 13:36:46.111115  321355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:36:46.130694  321355 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:46.130915  321355 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I1102 13:36:46.130933  321355 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-978795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-978795/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-978795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:36:46.272230  321355 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:36:46.272275  321355 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:36:46.272312  321355 ubuntu.go:190] setting up certificates
	I1102 13:36:46.272333  321355 provision.go:84] configureAuth start
	I1102 13:36:46.272414  321355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978795
	I1102 13:36:46.291086  321355 provision.go:143] copyHostCerts
	I1102 13:36:46.291151  321355 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:36:46.291166  321355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:36:46.291219  321355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:36:46.291365  321355 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:36:46.291379  321355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:36:46.291419  321355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:36:46.291524  321355 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:36:46.291536  321355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:36:46.291597  321355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:36:46.291685  321355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.no-preload-978795 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-978795]
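	
	The server certificate is generated in-process (provision.go) and signed by the machine CA with the SANs listed above. A hedged openssl equivalent, shown only to make the inputs concrete (minikube does not shell out to openssl for this):
	
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	        -subj "/O=jenkins.no-preload-978795" -out server.csr
	    openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
	        -CAcreateserial -days 1095 -out server.pem \
	        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:localhost,DNS:minikube,DNS:no-preload-978795')
	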
	I1102 13:36:46.655365  321355 provision.go:177] copyRemoteCerts
	I1102 13:36:46.655415  321355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:36:46.655448  321355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:36:46.674986  321355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/no-preload-978795/id_rsa Username:docker}
	I1102 13:36:46.776840  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:36:46.795627  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 13:36:46.813113  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:36:46.831812  321355 provision.go:87] duration metric: took 559.46386ms to configureAuth
	I1102 13:36:46.831842  321355 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:36:46.832034  321355 config.go:182] Loaded profile config "no-preload-978795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:46.832173  321355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:36:46.851050  321355 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:46.851338  321355 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33115 <nil> <nil>}
	I1102 13:36:46.851369  321355 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:36:47.185205  321355 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:36:47.185235  321355 machine.go:97] duration metric: took 4.420307162s to provisionDockerMachine
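	
	The /etc/sysconfig/crio.minikube file written above only has an effect because the crio unit in the kicbase image reads it as an environment file. An illustrative (not verbatim) systemd drop-in showing the usual wiring for such an option file:
	
	    # /etc/systemd/system/crio.service.d/10-minikube.conf -- illustrative only
	    [Service]
	    EnvironmentFile=-/etc/sysconfig/crio.minikube
	    ExecStart=
	    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
	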
	I1102 13:36:47.185250  321355 start.go:293] postStartSetup for "no-preload-978795" (driver="docker")
	I1102 13:36:47.185263  321355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:36:47.185329  321355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:36:47.185383  321355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:36:47.206682  321355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/no-preload-978795/id_rsa Username:docker}
	I1102 13:36:45.496506  314692 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:45.496537  314692 system_pods.go:89] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:45.496542  314692 system_pods.go:89] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running
	I1102 13:36:45.496547  314692 system_pods.go:89] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:36:45.496551  314692 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running
	I1102 13:36:45.496555  314692 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running
	I1102 13:36:45.496560  314692 system_pods.go:89] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:36:45.496582  314692 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running
	I1102 13:36:45.496595  314692 system_pods.go:89] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:36:45.496613  314692 retry.go:31] will retry after 557.22013ms: missing components: kube-dns
	I1102 13:36:46.058762  314692 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:46.058796  314692 system_pods.go:89] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running
	I1102 13:36:46.058804  314692 system_pods.go:89] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running
	I1102 13:36:46.058811  314692 system_pods.go:89] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:36:46.058817  314692 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running
	I1102 13:36:46.058822  314692 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running
	I1102 13:36:46.058827  314692 system_pods.go:89] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:36:46.058833  314692 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running
	I1102 13:36:46.058838  314692 system_pods.go:89] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:36:46.058848  314692 system_pods.go:126] duration metric: took 1.555896874s to wait for k8s-apps to be running ...
	I1102 13:36:46.058861  314692 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:36:46.058910  314692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:36:46.073599  314692 system_svc.go:56] duration metric: took 14.720127ms WaitForService to wait for kubelet
	I1102 13:36:46.073640  314692 kubeadm.go:587] duration metric: took 12.879727744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
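	
	Both waits above (first the kube-system pods, then the kubelet unit) are simple poll loops. A rough shell equivalent of the same readiness check, assuming kubectl points at this cluster:
	
	    deadline=$((SECONDS + 240))
	    until kubectl -n kube-system wait --for=condition=Ready pods --all --timeout=5s; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for pods"; exit 1; }
	      sleep 2
	    done
	    sudo systemctl is-active --quiet kubelet && echo "kubelet running"
	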
	I1102 13:36:46.073668  314692 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:36:46.077076  314692 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:36:46.077109  314692 node_conditions.go:123] node cpu capacity is 8
	I1102 13:36:46.077126  314692 node_conditions.go:105] duration metric: took 3.452322ms to run NodePressure ...
	I1102 13:36:46.077139  314692 start.go:242] waiting for startup goroutines ...
	I1102 13:36:46.077148  314692 start.go:247] waiting for cluster config update ...
	I1102 13:36:46.077164  314692 start.go:256] writing updated cluster config ...
	I1102 13:36:46.077445  314692 ssh_runner.go:195] Run: rm -f paused
	I1102 13:36:46.081554  314692 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:36:46.085347  314692 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:46.089469  314692 pod_ready.go:94] pod "coredns-66bc5c9577-4xsxx" is "Ready"
	I1102 13:36:46.089492  314692 pod_ready.go:86] duration metric: took 4.116759ms for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:46.091766  314692 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:46.096890  314692 pod_ready.go:94] pod "etcd-default-k8s-diff-port-538419" is "Ready"
	I1102 13:36:46.096917  314692 pod_ready.go:86] duration metric: took 5.124767ms for pod "etcd-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:46.099201  314692 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:46.103426  314692 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-538419" is "Ready"
	I1102 13:36:46.103462  314692 pod_ready.go:86] duration metric: took 4.232396ms for pod "kube-apiserver-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:46.105451  314692 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:46.485622  314692 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-538419" is "Ready"
	I1102 13:36:46.485650  314692 pod_ready.go:86] duration metric: took 380.177878ms for pod "kube-controller-manager-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:46.686552  314692 pod_ready.go:83] waiting for pod "kube-proxy-nnhqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:47.086155  314692 pod_ready.go:94] pod "kube-proxy-nnhqs" is "Ready"
	I1102 13:36:47.086199  314692 pod_ready.go:86] duration metric: took 399.605914ms for pod "kube-proxy-nnhqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:47.286319  314692 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:47.686024  314692 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-538419" is "Ready"
	I1102 13:36:47.686071  314692 pod_ready.go:86] duration metric: took 399.721559ms for pod "kube-scheduler-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:36:47.686087  314692 pod_ready.go:40] duration metric: took 1.604481478s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:36:47.735366  314692 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:36:47.736792  314692 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-538419" cluster and "default" namespace by default
	I1102 13:36:47.307450  321355 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:36:47.311529  321355 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:36:47.311552  321355 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:36:47.311561  321355 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:36:47.311640  321355 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:36:47.311737  321355 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:36:47.311844  321355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:36:47.320069  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:47.340277  321355 start.go:296] duration metric: took 155.013187ms for postStartSetup
	I1102 13:36:47.340344  321355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:36:47.340381  321355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:36:47.358493  321355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/no-preload-978795/id_rsa Username:docker}
	I1102 13:36:47.455934  321355 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:36:47.460480  321355 fix.go:56] duration metric: took 5.038440875s for fixHost
	I1102 13:36:47.460505  321355 start.go:83] releasing machines lock for "no-preload-978795", held for 5.038484134s
	I1102 13:36:47.460581  321355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-978795
	I1102 13:36:47.480198  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:36:47.480252  321355 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:36:47.480261  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:36:47.480293  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:36:47.480316  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:36:47.480350  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:36:47.480408  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:47.480489  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:36:47.480539  321355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:36:47.500958  321355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/no-preload-978795/id_rsa Username:docker}
	I1102 13:36:47.627767  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:36:47.657270  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:36:47.676444  321355 ssh_runner.go:195] Run: openssl version
	I1102 13:36:47.683466  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:36:47.693269  321355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:36:47.698081  321355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:36:47.698148  321355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:36:47.741655  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:36:47.751379  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:36:47.764777  321355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:47.769439  321355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:47.769484  321355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:47.807396  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:36:47.815876  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:36:47.827956  321355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:36:47.834008  321355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:36:47.834068  321355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:36:47.876109  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:36:47.885033  321355 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:36:47.888797  321355 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
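	
	The openssl x509 -hash calls above explain the symlink names: OpenSSL looks certificates up in /etc/ssl/certs by subject-name hash, so each CA cert gets a <hash>.0 link (3ec20f2e.0, b5213941.0 and 51391683.0 in this run). The generic pattern:
	
	    cert=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$cert")     # e.g. b5213941
	    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"    # .0 = first cert with this hash
	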
	I1102 13:36:47.893216  321355 ssh_runner.go:195] Run: cat /version.json
	I1102 13:36:47.893324  321355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:36:47.897724  321355 ssh_runner.go:195] Run: systemctl --version
	I1102 13:36:47.965749  321355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:36:48.005255  321355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:36:48.010134  321355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:36:48.010211  321355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:36:48.019053  321355 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:36:48.019095  321355 start.go:496] detecting cgroup driver to use...
	I1102 13:36:48.019123  321355 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:36:48.019165  321355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:36:48.034106  321355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:36:48.046820  321355 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:36:48.046877  321355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:36:48.064618  321355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:36:48.078215  321355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:36:48.165090  321355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:36:48.250309  321355 docker.go:234] disabling docker service ...
	I1102 13:36:48.250386  321355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:36:48.268667  321355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:36:48.282053  321355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:36:48.362983  321355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:36:48.446705  321355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:36:48.459871  321355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:36:48.474651  321355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:36:48.474719  321355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:48.483593  321355 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:36:48.483658  321355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:48.492901  321355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:48.502054  321355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:48.510945  321355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:36:48.519151  321355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:48.527945  321355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:48.536022  321355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:48.544892  321355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:36:48.552894  321355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:36:48.560517  321355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:48.644427  321355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:36:51.021630  321355 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.377162276s)
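	
	Taken together, the sed edits above leave the drop-in in roughly this state (reconstructed from the commands, not read back from the node), which is why crio is restarted immediately afterwards:
	
	    # /etc/crio/crio.conf.d/02-crio.conf (approximate result)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	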
	I1102 13:36:51.021661  321355 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:36:51.021707  321355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:36:51.026928  321355 start.go:564] Will wait 60s for crictl version
	I1102 13:36:51.026988  321355 ssh_runner.go:195] Run: which crictl
	I1102 13:36:51.030865  321355 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:36:51.057716  321355 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:36:51.057793  321355 ssh_runner.go:195] Run: crio --version
	I1102 13:36:51.092816  321355 ssh_runner.go:195] Run: crio --version
	I1102 13:36:51.126465  321355 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:36:47.147136  324005 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 13:36:47.147335  324005 start.go:159] libmachine.API.Create for "newest-cni-066482" (driver="docker")
	I1102 13:36:47.147361  324005 client.go:173] LocalClient.Create starting
	I1102 13:36:47.147423  324005 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem
	I1102 13:36:47.147452  324005 main.go:143] libmachine: Decoding PEM data...
	I1102 13:36:47.147465  324005 main.go:143] libmachine: Parsing certificate...
	I1102 13:36:47.147528  324005 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem
	I1102 13:36:47.147552  324005 main.go:143] libmachine: Decoding PEM data...
	I1102 13:36:47.147581  324005 main.go:143] libmachine: Parsing certificate...
	I1102 13:36:47.147888  324005 cli_runner.go:164] Run: docker network inspect newest-cni-066482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 13:36:47.166946  324005 cli_runner.go:211] docker network inspect newest-cni-066482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 13:36:47.167002  324005 network_create.go:284] running [docker network inspect newest-cni-066482] to gather additional debugging logs...
	I1102 13:36:47.167020  324005 cli_runner.go:164] Run: docker network inspect newest-cni-066482
	W1102 13:36:47.185291  324005 cli_runner.go:211] docker network inspect newest-cni-066482 returned with exit code 1
	I1102 13:36:47.185345  324005 network_create.go:287] error running [docker network inspect newest-cni-066482]: docker network inspect newest-cni-066482: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-066482 not found
	I1102 13:36:47.185362  324005 network_create.go:289] output of [docker network inspect newest-cni-066482]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-066482 not found
	
	** /stderr **
	I1102 13:36:47.185505  324005 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:36:47.207369  324005 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9493238624b4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:ff:51:3e:e4:f4} reservation:<nil>}
	I1102 13:36:47.208151  324005 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe6e64be95e5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ec:8c:d9:e4:62} reservation:<nil>}
	I1102 13:36:47.209157  324005 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce0c0e777855 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:03:0f:01:14:50} reservation:<nil>}
	I1102 13:36:47.210242  324005 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001df4830}
	I1102 13:36:47.210281  324005 network_create.go:124] attempt to create docker network newest-cni-066482 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1102 13:36:47.210336  324005 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-066482 newest-cni-066482
	I1102 13:36:47.270845  324005 network_create.go:108] docker network newest-cni-066482 192.168.76.0/24 created
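	
	The three "skipping subnet" lines show the subnet picker walking the private 192.168.x.0/24 candidates until it finds one no existing Docker network is using. A hedged shell version of the same probe, driven by Docker's own IPAM data:
	
	    docker network ls -q | xargs docker network inspect \
	        --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' > /tmp/used-subnets
	    for net in 192.168.49.0/24 192.168.58.0/24 192.168.67.0/24 192.168.76.0/24; do
	      grep -qx "$net" /tmp/used-subnets && continue   # already taken
	      echo "free subnet: $net" && break
	    done
	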
	I1102 13:36:47.270880  324005 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-066482" container
	I1102 13:36:47.270962  324005 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 13:36:47.289109  324005 cli_runner.go:164] Run: docker volume create newest-cni-066482 --label name.minikube.sigs.k8s.io=newest-cni-066482 --label created_by.minikube.sigs.k8s.io=true
	I1102 13:36:47.307209  324005 oci.go:103] Successfully created a docker volume newest-cni-066482
	I1102 13:36:47.307294  324005 cli_runner.go:164] Run: docker run --rm --name newest-cni-066482-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-066482 --entrypoint /usr/bin/test -v newest-cni-066482:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 13:36:47.691218  324005 oci.go:107] Successfully prepared a docker volume newest-cni-066482
	I1102 13:36:47.691252  324005 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:36:47.691270  324005 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 13:36:47.691316  324005 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-066482:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1102 13:36:50.981362  324005 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-066482:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.28997853s)
	I1102 13:36:50.981399  324005 kic.go:203] duration metric: took 3.290124654s to extract preloaded images to volume ...
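	
	Extracting the lz4 preload into the named volume before the node container exists is what lets cri-o start with every image already in its store. To inspect the result without booting the node, the volume can be mounted read-only; the storage path below is the conventional cri-o location, assumed here rather than taken from the log:
	
	    docker run --rm --entrypoint /bin/ls \
	        -v newest-cni-066482:/var:ro \
	        gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
	        /var/lib/containers/storage
	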
	W1102 13:36:50.981502  324005 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1102 13:36:50.981553  324005 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1102 13:36:50.981614  324005 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 13:36:51.042709  324005 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-066482 --name newest-cni-066482 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-066482 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-066482 --network newest-cni-066482 --ip 192.168.76.2 --volume newest-cni-066482:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 13:36:51.328803  324005 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Running}}
	I1102 13:36:51.354935  324005 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:36:51.374466  324005 cli_runner.go:164] Run: docker exec newest-cni-066482 stat /var/lib/dpkg/alternatives/iptables
	I1102 13:36:51.431475  324005 oci.go:144] the created container "newest-cni-066482" has a running status.
	I1102 13:36:51.431525  324005 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa...
	I1102 13:36:51.595158  324005 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 13:36:51.630043  324005 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:36:51.650861  324005 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 13:36:51.650879  324005 kic_runner.go:114] Args: [docker exec --privileged newest-cni-066482 chown docker:docker /home/docker/.ssh/authorized_keys]
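	
	SSH provisioning for a kic node comes down to three host-side steps: generate a keypair, copy the public half into the container, and fix ownership. A sketch of the same sequence using plain docker commands:
	
	    ssh-keygen -t rsa -N '' -f ./id_rsa                  # keypair on the host
	    docker cp ./id_rsa.pub newest-cni-066482:/home/docker/.ssh/authorized_keys
	    docker exec --privileged newest-cni-066482 \
	        chown docker:docker /home/docker/.ssh/authorized_keys
	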
	I1102 13:36:51.715213  324005 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:36:51.738377  324005 machine.go:94] provisionDockerMachine start ...
	I1102 13:36:51.738485  324005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:36:51.761912  324005 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:51.762287  324005 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1102 13:36:51.762310  324005 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:36:51.906879  324005 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:36:51.906912  324005 ubuntu.go:182] provisioning hostname "newest-cni-066482"
	I1102 13:36:51.906988  324005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:36:51.927797  324005 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:51.928115  324005 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1102 13:36:51.928133  324005 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-066482 && echo "newest-cni-066482" | sudo tee /etc/hostname
	I1102 13:36:51.127696  321355 cli_runner.go:164] Run: docker network inspect no-preload-978795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:36:51.146288  321355 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1102 13:36:51.150532  321355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:36:51.160440  321355 kubeadm.go:884] updating cluster {Name:no-preload-978795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-978795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:36:51.160547  321355 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:36:51.160616  321355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:36:51.195647  321355 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:36:51.195670  321355 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:36:51.195679  321355 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1102 13:36:51.195780  321355 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-978795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-978795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
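Note: the empty ExecStart= in the [Service] section above is deliberate, not truncation. systemd treats ExecStart as a list in drop-ins, so an override must first reset the value inherited from the base unit before assigning a new one. The minimal form of the pattern (a sketch, not minikube's exact drop-in):

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (pattern sketch)
	[Service]
	# an empty assignment clears the ExecStart inherited from kubelet.service ...
	ExecStart=
	# ... so that exactly one ExecStart remains in effect
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf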
	I1102 13:36:51.195857  321355 ssh_runner.go:195] Run: crio config
	I1102 13:36:51.248008  321355 cni.go:84] Creating CNI manager for ""
	I1102 13:36:51.248040  321355 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:36:51.248053  321355 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:36:51.248074  321355 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-978795 NodeName:no-preload-978795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:36:51.248199  321355 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-978795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
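	Note: the three YAML documents above (InitConfiguration + ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. When reproducing a failure by hand, the file can be sanity-checked offline; a sketch, assuming the node's kubeadm build supports the validate subcommand (v1.34 does):

	# validate the generated config without touching the cluster
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new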
	
	I1102 13:36:51.248256  321355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:36:51.257008  321355 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:36:51.257082  321355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:36:51.265528  321355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 13:36:51.278732  321355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:36:51.291341  321355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1102 13:36:51.304278  321355 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:36:51.307883  321355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
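Note: the bash one-liner above is minikube's idempotent /etc/hosts update: filter out any stale entry for the name, append the fresh mapping, and install the result with a single privileged copy so a partial edit never lands. The same logic unrolled (NAME and IP are illustrative):

	NAME=control-plane.minikube.internal
	IP=192.168.94.2
	grep -v "$(printf '\t')${NAME}\$" /etc/hosts > "/tmp/h.$$"   # drop any existing line for NAME
	printf '%s\t%s\n' "$IP" "$NAME" >> "/tmp/h.$$"               # append the current mapping
	sudo cp "/tmp/h.$$" /etc/hosts                               # one privileged write, so readers never see a half-edited file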
	I1102 13:36:51.317927  321355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:51.422937  321355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:36:51.446997  321355 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/no-preload-978795 for IP: 192.168.94.2
	I1102 13:36:51.447048  321355 certs.go:195] generating shared ca certs ...
	I1102 13:36:51.447073  321355 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:51.447280  321355 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:36:51.447355  321355 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:36:51.447370  321355 certs.go:257] generating profile certs ...
	I1102 13:36:51.447473  321355 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/no-preload-978795/client.key
	I1102 13:36:51.447537  321355 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/no-preload-978795/apiserver.key.522d66ce
	I1102 13:36:51.447607  321355 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/no-preload-978795/proxy-client.key
	I1102 13:36:51.447854  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:36:51.447907  321355 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:36:51.447922  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:36:51.447961  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:36:51.447999  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:36:51.448046  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:36:51.448107  321355 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:51.448853  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:36:51.471542  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:36:51.507034  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:36:51.538838  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:36:51.572560  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/no-preload-978795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 13:36:51.601944  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/no-preload-978795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:36:51.626800  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/no-preload-978795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:36:51.647526  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/no-preload-978795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:36:51.674297  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:36:51.703141  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:36:51.724555  321355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:36:51.749241  321355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:36:51.763980  321355 ssh_runner.go:195] Run: openssl version
	I1102 13:36:51.771253  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:36:51.781645  321355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:36:51.786499  321355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:36:51.786557  321355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:36:51.822701  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:36:51.831492  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:36:51.841190  321355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:51.844873  321355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:51.844925  321355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:51.882784  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:36:51.891067  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:36:51.900061  321355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:36:51.904191  321355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:36:51.904250  321355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:36:51.947862  321355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
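Note: the repeated openssl x509 -hash / ln -fs pairs above build OpenSSL's hashed CA directory: libssl locates a CA in /etc/ssl/certs by the hash of its subject name, expecting a symlink named <hash>.0. What minikube is effectively doing, expressed as a loop (sketch):

	for pem in /usr/share/ca-certificates/*.pem; do
	  h=$(openssl x509 -hash -noout -in "$pem")    # subject-name hash, e.g. b5213941
	  sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"   # the .0 suffix leaves room for hash collisions
	done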
	I1102 13:36:51.956118  321355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:36:51.960783  321355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:36:52.007615  321355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:36:52.056109  321355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:36:52.108041  321355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:36:52.167587  321355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:36:52.207970  321355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1102 13:36:52.245509  321355 kubeadm.go:401] StartCluster: {Name:no-preload-978795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-978795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:36:52.245643  321355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:36:52.245727  321355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:36:52.278139  321355 cri.go:89] found id: "d75465a215601ad6902284a8f4ac503bad1e462f3234ddee3675f0f0f025f32b"
	I1102 13:36:52.278165  321355 cri.go:89] found id: "34d6d45b3c166e3ece7cae55497eada59f4b0a2911a5e1fda5cfa3e653f11a69"
	I1102 13:36:52.278171  321355 cri.go:89] found id: "42ae707ea436fef32dc405c69f4b8a2094bf96e5cb62e2fb5a4f97d5c5f87181"
	I1102 13:36:52.278176  321355 cri.go:89] found id: "05fdfa04c33553bae9ce98eabd014d2c5fe0f3155fff9f5518fc306c67872c48"
	I1102 13:36:52.278180  321355 cri.go:89] found id: ""
	I1102 13:36:52.278225  321355 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:36:52.291800  321355 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:36:52Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:36:52.291883  321355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:36:52.300936  321355 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:36:52.300959  321355 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:36:52.300999  321355 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:36:52.308743  321355 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:36:52.309931  321355 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-978795" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:36:52.310803  321355 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-978795" cluster setting kubeconfig missing "no-preload-978795" context setting]
	I1102 13:36:52.312060  321355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:52.314197  321355 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:36:52.322942  321355 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1102 13:36:52.322975  321355 kubeadm.go:602] duration metric: took 22.010186ms to restartPrimaryControlPlane
	I1102 13:36:52.322987  321355 kubeadm.go:403] duration metric: took 77.491839ms to StartCluster
	I1102 13:36:52.323005  321355 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:52.323073  321355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:36:52.324867  321355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:52.325142  321355 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:36:52.325386  321355 config.go:182] Loaded profile config "no-preload-978795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:52.325370  321355 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:36:52.325494  321355 addons.go:70] Setting storage-provisioner=true in profile "no-preload-978795"
	I1102 13:36:52.325500  321355 addons.go:70] Setting dashboard=true in profile "no-preload-978795"
	I1102 13:36:52.325523  321355 addons.go:239] Setting addon storage-provisioner=true in "no-preload-978795"
	I1102 13:36:52.325518  321355 addons.go:239] Setting addon dashboard=true in "no-preload-978795"
	W1102 13:36:52.325533  321355 addons.go:248] addon storage-provisioner should already be in state true
	W1102 13:36:52.325534  321355 addons.go:248] addon dashboard should already be in state true
	I1102 13:36:52.325558  321355 host.go:66] Checking if "no-preload-978795" exists ...
	I1102 13:36:52.325558  321355 host.go:66] Checking if "no-preload-978795" exists ...
	I1102 13:36:52.325650  321355 addons.go:70] Setting default-storageclass=true in profile "no-preload-978795"
	I1102 13:36:52.325689  321355 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-978795"
	I1102 13:36:52.326024  321355 cli_runner.go:164] Run: docker container inspect no-preload-978795 --format={{.State.Status}}
	I1102 13:36:52.326079  321355 cli_runner.go:164] Run: docker container inspect no-preload-978795 --format={{.State.Status}}
	I1102 13:36:52.326080  321355 cli_runner.go:164] Run: docker container inspect no-preload-978795 --format={{.State.Status}}
	I1102 13:36:52.327525  321355 out.go:179] * Verifying Kubernetes components...
	I1102 13:36:52.329018  321355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:52.353645  321355 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:36:52.354718  321355 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:36:52.354784  321355 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:36:52.354800  321355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:36:52.354854  321355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:36:52.355073  321355 addons.go:239] Setting addon default-storageclass=true in "no-preload-978795"
	W1102 13:36:52.355096  321355 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:36:52.355127  321355 host.go:66] Checking if "no-preload-978795" exists ...
	I1102 13:36:52.355588  321355 cli_runner.go:164] Run: docker container inspect no-preload-978795 --format={{.State.Status}}
	I1102 13:36:52.357339  321355 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 13:36:52.091025  324005 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:36:52.091101  324005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:36:52.116558  324005 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:52.116866  324005 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1102 13:36:52.116905  324005 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-066482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-066482/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-066482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:36:52.270778  324005 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:36:52.270806  324005 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:36:52.271004  324005 ubuntu.go:190] setting up certificates
	I1102 13:36:52.271017  324005 provision.go:84] configureAuth start
	I1102 13:36:52.271079  324005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:36:52.292876  324005 provision.go:143] copyHostCerts
	I1102 13:36:52.292939  324005 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:36:52.292952  324005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:36:52.293033  324005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:36:52.293169  324005 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:36:52.293183  324005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:36:52.293223  324005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:36:52.293332  324005 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:36:52.293344  324005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:36:52.293382  324005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:36:52.293476  324005 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.newest-cni-066482 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-066482]
	I1102 13:36:52.308048  324005 provision.go:177] copyRemoteCerts
	I1102 13:36:52.308124  324005 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:36:52.308165  324005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:36:52.331204  324005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:36:52.447684  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:36:52.470284  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:36:52.488731  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:36:52.510439  324005 provision.go:87] duration metric: took 239.407505ms to configureAuth
	I1102 13:36:52.510484  324005 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:36:52.510727  324005 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:36:52.510880  324005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:36:52.537977  324005 main.go:143] libmachine: Using SSH client type: native
	I1102 13:36:52.538458  324005 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33120 <nil> <nil>}
	I1102 13:36:52.538494  324005 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:36:52.811792  324005 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:36:52.811817  324005 machine.go:97] duration metric: took 1.07341737s to provisionDockerMachine
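	Note: writing /etc/sysconfig/crio.minikube a few lines above only takes effect because the kicbase image wires that file into the CRI-O unit as an environment file. The mechanism looks roughly like the drop-in below (an illustrative sketch of the pattern, not the image's exact unit):

	# /etc/systemd/system/crio.service.d/10-minikube.conf (illustrative)
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS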
	I1102 13:36:52.811829  324005 client.go:176] duration metric: took 5.664462759s to LocalClient.Create
	I1102 13:36:52.811849  324005 start.go:167] duration metric: took 5.664514106s to libmachine.API.Create "newest-cni-066482"
	I1102 13:36:52.811858  324005 start.go:293] postStartSetup for "newest-cni-066482" (driver="docker")
	I1102 13:36:52.811871  324005 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:36:52.811933  324005 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:36:52.811981  324005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:36:52.831122  324005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:36:52.933816  324005 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:36:52.937808  324005 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:36:52.937836  324005 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:36:52.937848  324005 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:36:52.937900  324005 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:36:52.938017  324005 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:36:52.938143  324005 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:36:52.946502  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:52.967007  324005 start.go:296] duration metric: took 155.131591ms for postStartSetup
	I1102 13:36:52.967400  324005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:36:52.986518  324005 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:36:52.986816  324005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:36:52.986881  324005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:36:53.005640  324005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:36:53.102872  324005 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:36:53.107290  324005 start.go:128] duration metric: took 5.964373561s to createHost
	I1102 13:36:53.107315  324005 start.go:83] releasing machines lock for "newest-cni-066482", held for 5.964496846s
	I1102 13:36:53.107392  324005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:36:53.124956  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:36:53.125004  324005 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:36:53.125015  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:36:53.125044  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:36:53.125076  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:36:53.125110  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:36:53.125169  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:53.125267  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:36:53.125331  324005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:36:53.142897  324005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:36:53.256627  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:36:53.274501  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:36:53.291612  324005 ssh_runner.go:195] Run: openssl version
	I1102 13:36:53.297782  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:36:53.305949  324005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:53.309727  324005 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:53.309790  324005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:53.343289  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:36:53.351944  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:36:53.360076  324005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:36:53.363617  324005 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:36:53.363664  324005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:36:53.397595  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:36:53.406385  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:36:53.414828  324005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:36:53.418679  324005 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:36:53.418745  324005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:36:53.452917  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:36:53.461690  324005 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:36:53.465295  324005 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:36:53.468883  324005 ssh_runner.go:195] Run: cat /version.json
	I1102 13:36:53.468994  324005 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:36:53.472560  324005 ssh_runner.go:195] Run: systemctl --version
	I1102 13:36:53.528146  324005 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:36:53.568339  324005 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:36:53.573924  324005 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:36:53.573991  324005 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:36:53.602364  324005 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1102 13:36:53.602391  324005 start.go:496] detecting cgroup driver to use...
	I1102 13:36:53.602427  324005 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:36:53.602474  324005 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:36:53.622558  324005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:36:53.637297  324005 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:36:53.637368  324005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:36:53.656822  324005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:36:53.677630  324005 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:36:53.778590  324005 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:36:53.880534  324005 docker.go:234] disabling docker service ...
	I1102 13:36:53.880628  324005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:36:53.901336  324005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:36:53.916426  324005 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:36:54.011177  324005 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:36:54.102168  324005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:36:54.115369  324005 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:36:54.129724  324005 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:36:54.130073  324005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:54.144060  324005 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:36:54.144133  324005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:54.154884  324005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:54.168169  324005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:54.177315  324005 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:36:54.185627  324005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:54.194404  324005 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:54.207962  324005 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:36:54.217048  324005 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:36:54.224345  324005 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:36:54.231872  324005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:54.319029  324005 ssh_runner.go:195] Run: sudo systemctl restart crio
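	Note: after the sed pipeline above, /etc/crio/crio.conf.d/02-crio.conf should end up carrying roughly the settings below (approximate shape; exact section placement depends on the file the kicbase image ships):

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"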
	I1102 13:36:54.455036  324005 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:36:54.455112  324005 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:36:54.464037  324005 start.go:564] Will wait 60s for crictl version
	I1102 13:36:54.464101  324005 ssh_runner.go:195] Run: which crictl
	I1102 13:36:54.470426  324005 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:36:54.518475  324005 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:36:54.518574  324005 ssh_runner.go:195] Run: crio --version
	I1102 13:36:54.567800  324005 ssh_runner.go:195] Run: crio --version
	I1102 13:36:54.615628  324005 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:36:54.617140  324005 cli_runner.go:164] Run: docker network inspect newest-cni-066482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:36:54.640141  324005 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 13:36:54.645114  324005 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:36:54.661103  324005 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1102 13:36:52.358424  321355 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:36:52.358443  321355 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:36:52.358492  321355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:36:52.389737  321355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/no-preload-978795/id_rsa Username:docker}
	I1102 13:36:52.390383  321355 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:36:52.390406  321355 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:36:52.390463  321355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:36:52.390695  321355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/no-preload-978795/id_rsa Username:docker}
	I1102 13:36:52.414087  321355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/no-preload-978795/id_rsa Username:docker}
	I1102 13:36:52.499586  321355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:36:52.513111  321355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:36:52.516937  321355 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:36:52.516964  321355 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:36:52.520050  321355 node_ready.go:35] waiting up to 6m0s for node "no-preload-978795" to be "Ready" ...
	I1102 13:36:52.531292  321355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:36:52.539140  321355 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:36:52.539166  321355 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:36:52.557133  321355 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:36:52.557159  321355 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:36:52.576345  321355 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:36:52.576367  321355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:36:52.598054  321355 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:36:52.598081  321355 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:36:52.617706  321355 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:36:52.617735  321355 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:36:52.631205  321355 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:36:52.631236  321355 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:36:52.644148  321355 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:36:52.644175  321355 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:36:52.657559  321355 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:36:52.657609  321355 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:36:52.671108  321355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:36:54.474626  321355 node_ready.go:49] node "no-preload-978795" is "Ready"
	I1102 13:36:54.474663  321355 node_ready.go:38] duration metric: took 1.954577517s for node "no-preload-978795" to be "Ready" ...
	I1102 13:36:54.474680  321355 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:36:54.474736  321355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:36:55.133364  321355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.620217937s)
	I1102 13:36:55.133444  321355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.602118046s)
	I1102 13:36:55.133534  321355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.462388025s)
	I1102 13:36:55.133586  321355 api_server.go:72] duration metric: took 2.80841022s to wait for apiserver process to appear ...
	I1102 13:36:55.133605  321355 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:36:55.133679  321355 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1102 13:36:55.135204  321355 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-978795 addons enable metrics-server
	
	I1102 13:36:55.138083  321355 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:36:55.138105  321355 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
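The 500s above are the expected bootstrap transient: /healthz returns 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks are still running, and flips to 200 once they complete (visible at 13:36:56 below). A minimal Go sketch of this style of wait loop, assuming direct HTTPS access to the endpoint and skipping CA verification for brevity; it illustrates the pattern, not minikube's actual api_server.go implementation:

	// healthzwait: poll a kube-apiserver /healthz endpoint until it reports 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// InsecureSkipVerify only because this sketch carries no cluster CA;
		// a real client should verify against /var/lib/minikube/certs/ca.crt.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // body is simply "ok"
				}
				// A 500 with "[-]poststarthook/..." lines means: not ready yet.
				fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}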
	I1102 13:36:55.140493  321355 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1102 13:36:54.662426  324005 kubeadm.go:884] updating cluster {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:36:54.662575  324005 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:36:54.662645  324005 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:36:54.700328  324005 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:36:54.700349  324005 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:36:54.700401  324005 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:36:54.728661  324005 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:36:54.728685  324005 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:36:54.728694  324005 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 13:36:54.728806  324005 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-066482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 13:36:54.728888  324005 ssh_runner.go:195] Run: crio config
	I1102 13:36:54.783485  324005 cni.go:84] Creating CNI manager for ""
	I1102 13:36:54.783505  324005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:36:54.783515  324005 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1102 13:36:54.783537  324005 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-066482 NodeName:newest-cni-066482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:36:54.783697  324005 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-066482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
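The file written above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---). A hedged sketch of reading one of those documents back with gopkg.in/yaml.v3 to sanity-check the kubelet settings; the path and the fields checked are taken from the log, everything else is illustrative:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// kubeletDoc captures just the fields we want to check from the
	// KubeletConfiguration document shown in the log above.
	type kubeletDoc struct {
		Kind         string `yaml:"kind"`
		CgroupDriver string `yaml:"cgroupDriver"`
		FailSwapOn   bool   `yaml:"failSwapOn"`
	}

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // walks the '---'-separated documents in order
		for {
			var d kubeletDoc
			if err := dec.Decode(&d); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			if d.Kind == "KubeletConfiguration" {
				// CRI-O on this image runs under systemd, so the kubelet must agree.
				fmt.Printf("cgroupDriver=%s failSwapOn=%v\n", d.CgroupDriver, d.FailSwapOn)
			}
		}
	}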
	
	I1102 13:36:54.783758  324005 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:36:54.792307  324005 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:36:54.792373  324005 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:36:54.800162  324005 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 13:36:54.813677  324005 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:36:54.830808  324005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1102 13:36:54.845286  324005 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:36:54.852350  324005 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
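The one-liner above updates /etc/hosts idempotently: grep -v drops any stale control-plane.minikube.internal entry, echo appends the current mapping, and the result is staged in /tmp before sudo cp puts it in place. A rough Go equivalent of that filter-append-replace sequence (same path and hostname as the log; assumes it runs as root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	const hostsLine = "192.168.76.2\tcontrol-plane.minikube.internal"

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any stale mapping for the control-plane alias, like grep -v above.
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + hostsLine + "\n"
		// Stage the new content first, then swap it in, so readers never see a torn file.
		tmp := "/etc/hosts.minikube.tmp"
		if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
			panic(err)
		}
		if err := os.Rename(tmp, "/etc/hosts"); err != nil {
			panic(err)
		}
		fmt.Println("updated /etc/hosts")
	}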
	I1102 13:36:54.866054  324005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:36:54.993938  324005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:36:55.024316  324005 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482 for IP: 192.168.76.2
	I1102 13:36:55.024343  324005 certs.go:195] generating shared ca certs ...
	I1102 13:36:55.024365  324005 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:55.024514  324005 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:36:55.024575  324005 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:36:55.024591  324005 certs.go:257] generating profile certs ...
	I1102 13:36:55.024654  324005 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.key
	I1102 13:36:55.024669  324005 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.crt with IP's: []
	I1102 13:36:55.166082  324005 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.crt ...
	I1102 13:36:55.166108  324005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.crt: {Name:mk641beaeed7a6d9ae23b4977891db93c118fde3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:55.166255  324005 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.key ...
	I1102 13:36:55.166266  324005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.key: {Name:mk16043e1a401b460b3f6954dbebfc153e93e840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:55.166369  324005 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b
	I1102 13:36:55.166386  324005 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt.c4504c8b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1102 13:36:55.562018  324005 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt.c4504c8b ...
	I1102 13:36:55.562053  324005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt.c4504c8b: {Name:mk898670f0d8e87f1eac58c36449f2434c862943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:55.562256  324005 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b ...
	I1102 13:36:55.562274  324005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b: {Name:mk9183a32996b99a7aeecc292ae5256be2994998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:55.562398  324005 certs.go:382] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt.c4504c8b -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt
	I1102 13:36:55.562500  324005 certs.go:386] copying /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b -> /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key
	I1102 13:36:55.562599  324005 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key
	I1102 13:36:55.562629  324005 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt with IP's: []
	I1102 13:36:55.996372  324005 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt ...
	I1102 13:36:55.996399  324005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt: {Name:mk239be25e747b4dad5afc1c461b488729f60d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:55.996555  324005 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key ...
	I1102 13:36:55.996583  324005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key: {Name:mkbd8d7249e06c97e6af4a7249cc19c93d8613a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:36:55.996762  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:36:55.996798  324005 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:36:55.996812  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:36:55.996832  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:36:55.996852  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:36:55.996873  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:36:55.996948  324005 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:36:55.997479  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:36:56.015713  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:36:56.033281  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:36:56.050719  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:36:56.067742  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 13:36:56.084768  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:36:56.101488  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:36:56.118002  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:36:56.134543  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:36:56.152956  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:36:56.172345  324005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:36:56.191759  324005 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:36:56.206897  324005 ssh_runner.go:195] Run: openssl version
	I1102 13:36:56.214199  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:36:56.224449  324005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:36:56.229118  324005 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:36:56.229179  324005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:36:56.274182  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:36:56.283620  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:36:56.292955  324005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:56.296911  324005 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:56.296965  324005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:36:56.339942  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:36:56.348140  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:36:56.356862  324005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:36:56.360380  324005 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:36:56.360424  324005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:36:56.394690  324005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
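Each ln -fs above installs a certificate under OpenSSL's CA-directory convention: the file is linked as <subject-hash>.0, where the hash comes from openssl x509 -hash -noout, so OpenSSL-based clients can look a CA up by subject name. A hedged sketch of those two steps together, shelling out to openssl rather than reimplementing its subject-name hash:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash symlinks certPath into dir under OpenSSL's
	// "<subject-hash>.0" naming, mirroring the ln -fs commands in the log above.
	func linkBySubjectHash(certPath, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := dir + "/" + hash + ".0"
		_ = os.Remove(link) // -f behavior: replace a stale link if present
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}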
	I1102 13:36:56.402919  324005 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:36:56.406409  324005 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 13:36:56.406469  324005 kubeadm.go:401] StartCluster: {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:36:56.406529  324005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:36:56.406599  324005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:36:56.433969  324005 cri.go:89] found id: ""
	I1102 13:36:56.434046  324005 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:36:56.441971  324005 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 13:36:56.449665  324005 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 13:36:56.449721  324005 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 13:36:56.457208  324005 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 13:36:56.457223  324005 kubeadm.go:158] found existing configuration files:
	
	I1102 13:36:56.457262  324005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 13:36:56.464308  324005 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 13:36:56.464349  324005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 13:36:56.471413  324005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 13:36:56.478732  324005 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 13:36:56.478774  324005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 13:36:56.485886  324005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 13:36:56.493374  324005 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 13:36:56.493412  324005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 13:36:56.500533  324005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 13:36:56.509852  324005 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 13:36:56.509908  324005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1102 13:36:56.518821  324005 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
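The Start line above runs the version-pinned kubeadm with PATH prefixed for just that invocation, so the binaries under /var/lib/minikube/binaries/v1.34.1 win over anything on the host. A rough Go sketch of the same pattern with os/exec (the long --ignore-preflight-errors list is elided; paths as in the log):

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		bin := "/var/lib/minikube/binaries/v1.34.1"
		// Invoke the pinned kubeadm by full path: exec.Command resolves the
		// program via the parent's PATH, so cmd.Env alone would not pick it.
		cmd := exec.Command(bin+"/kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
		// kubeadm shells out to other binaries, so prefix PATH for its
		// children, replacing (not duplicating) the inherited PATH entry.
		env := []string{"PATH=" + bin + ":" + os.Getenv("PATH")}
		for _, kv := range os.Environ() {
			if !strings.HasPrefix(kv, "PATH=") {
				env = append(env, kv)
			}
		}
		cmd.Env = env
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}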
	I1102 13:36:56.569715  324005 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 13:36:56.569792  324005 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 13:36:56.592737  324005 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 13:36:56.592825  324005 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1102 13:36:56.592897  324005 kubeadm.go:319] OS: Linux
	I1102 13:36:56.592982  324005 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 13:36:56.593044  324005 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 13:36:56.593103  324005 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 13:36:56.593146  324005 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 13:36:56.593233  324005 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 13:36:56.593327  324005 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 13:36:56.593407  324005 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 13:36:56.593480  324005 kubeadm.go:319] CGROUPS_IO: enabled
	I1102 13:36:56.654028  324005 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 13:36:56.654195  324005 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 13:36:56.654340  324005 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 13:36:56.662819  324005 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 13:36:56.666417  324005 out.go:252]   - Generating certificates and keys ...
	I1102 13:36:56.666486  324005 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 13:36:56.666540  324005 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 13:36:55.141440  321355 addons.go:515] duration metric: took 2.816234168s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:36:55.633884  321355 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1102 13:36:55.639956  321355 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:36:55.639986  321355 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:36:56.134629  321355 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1102 13:36:56.138675  321355 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1102 13:36:56.139754  321355 api_server.go:141] control plane version: v1.34.1
	I1102 13:36:56.139797  321355 api_server.go:131] duration metric: took 1.006130664s to wait for apiserver health ...
	I1102 13:36:56.139809  321355 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:36:56.143726  321355 system_pods.go:59] 8 kube-system pods found
	I1102 13:36:56.143755  321355 system_pods.go:61] "coredns-66bc5c9577-2dtpc" [8533e5ca-78ef-4401-b967-018eceeb5321] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:56.143764  321355 system_pods.go:61] "etcd-no-preload-978795" [3accbd43-641b-4243-9b9f-d7b40c27d25a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:36:56.143770  321355 system_pods.go:61] "kindnet-d8n4x" [c337ae93-812a-455b-bfe4-cdf49864936f] Running
	I1102 13:36:56.143776  321355 system_pods.go:61] "kube-apiserver-no-preload-978795" [aca34947-60ef-4b9f-a159-b323fd9c325e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:36:56.143784  321355 system_pods.go:61] "kube-controller-manager-no-preload-978795" [995b65a0-2705-4e33-a002-44f3db50a736] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:36:56.143789  321355 system_pods.go:61] "kube-proxy-rmkmd" [98f26f5f-cb23-4052-a93d-328210c54a54] Running
	I1102 13:36:56.143794  321355 system_pods.go:61] "kube-scheduler-no-preload-978795" [f2b2b91b-09ee-414a-9675-eafad041fcfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:36:56.143797  321355 system_pods.go:61] "storage-provisioner" [0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1] Running
	I1102 13:36:56.143802  321355 system_pods.go:74] duration metric: took 3.983767ms to wait for pod list to return data ...
	I1102 13:36:56.143811  321355 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:36:56.146303  321355 default_sa.go:45] found service account: "default"
	I1102 13:36:56.146323  321355 default_sa.go:55] duration metric: took 2.505051ms for default service account to be created ...
	I1102 13:36:56.146331  321355 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:36:56.148741  321355 system_pods.go:86] 8 kube-system pods found
	I1102 13:36:56.148764  321355 system_pods.go:89] "coredns-66bc5c9577-2dtpc" [8533e5ca-78ef-4401-b967-018eceeb5321] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:36:56.148771  321355 system_pods.go:89] "etcd-no-preload-978795" [3accbd43-641b-4243-9b9f-d7b40c27d25a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:36:56.148776  321355 system_pods.go:89] "kindnet-d8n4x" [c337ae93-812a-455b-bfe4-cdf49864936f] Running
	I1102 13:36:56.148814  321355 system_pods.go:89] "kube-apiserver-no-preload-978795" [aca34947-60ef-4b9f-a159-b323fd9c325e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:36:56.148824  321355 system_pods.go:89] "kube-controller-manager-no-preload-978795" [995b65a0-2705-4e33-a002-44f3db50a736] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:36:56.148829  321355 system_pods.go:89] "kube-proxy-rmkmd" [98f26f5f-cb23-4052-a93d-328210c54a54] Running
	I1102 13:36:56.148835  321355 system_pods.go:89] "kube-scheduler-no-preload-978795" [f2b2b91b-09ee-414a-9675-eafad041fcfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:36:56.148838  321355 system_pods.go:89] "storage-provisioner" [0d0ae727-75eb-4ea5-b0b8-f044d6b80bb1] Running
	I1102 13:36:56.148844  321355 system_pods.go:126] duration metric: took 2.508583ms to wait for k8s-apps to be running ...
	I1102 13:36:56.148852  321355 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:36:56.148885  321355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:36:56.162269  321355 system_svc.go:56] duration metric: took 13.407583ms WaitForService to wait for kubelet
	I1102 13:36:56.162296  321355 kubeadm.go:587] duration metric: took 3.837122691s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:36:56.162328  321355 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:36:56.165761  321355 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:36:56.165784  321355 node_conditions.go:123] node cpu capacity is 8
	I1102 13:36:56.165795  321355 node_conditions.go:105] duration metric: took 3.462119ms to run NodePressure ...
	I1102 13:36:56.165807  321355 start.go:242] waiting for startup goroutines ...
	I1102 13:36:56.165813  321355 start.go:247] waiting for cluster config update ...
	I1102 13:36:56.165823  321355 start.go:256] writing updated cluster config ...
	I1102 13:36:56.166121  321355 ssh_runner.go:195] Run: rm -f paused
	I1102 13:36:56.170442  321355 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:36:56.242988  321355 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
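pod_ready.go is polling each listed kube-system pod until it either reports the Ready condition or disappears. A hedged client-go sketch of that per-pod test, assuming a kubeconfig at the path minikube writes above; it illustrates the condition check, not minikube's own helper:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		"k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReadyOrGone reports true when the pod has condition Ready=True,
	// or when it no longer exists ("or be gone" in the log above).
	func podReadyOrGone(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return true, nil
		}
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ok, err := podReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-2dtpc")
		fmt.Println(ok, err)
	}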
	
	
	==> CRI-O <==
	Nov 02 13:36:44 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:44.58402305Z" level=info msg="Starting container: d3607512d190befefc682123a9fc73d13d595027d934ba01d113bbedd06506ea" id=7f38e62b-fc95-46af-80ec-1f8b5f1db7bd name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:36:44 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:44.585815161Z" level=info msg="Started container" PID=1891 containerID=d3607512d190befefc682123a9fc73d13d595027d934ba01d113bbedd06506ea description=kube-system/coredns-66bc5c9577-4xsxx/coredns id=7f38e62b-fc95-46af-80ec-1f8b5f1db7bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e6afcfd8f78132136f65413af7b57c96ddd18502e8126cbadd6835ecbf481b0
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.231453594Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8eb66a36-98a1-4153-9f43-c969ba3c1487 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.231545834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.236291719Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:637d1f9df851ce42f6970a45468f18a2ec0f265ca72f9c77db45324fea5d1b59 UID:4c549d03-4904-4a3b-b321-820059b96c9e NetNS:/var/run/netns/5c72f032-c007-4c3d-87e5-83df3bda313b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005bcaf8}] Aliases:map[]}"
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.236332585Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.247623894Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:637d1f9df851ce42f6970a45468f18a2ec0f265ca72f9c77db45324fea5d1b59 UID:4c549d03-4904-4a3b-b321-820059b96c9e NetNS:/var/run/netns/5c72f032-c007-4c3d-87e5-83df3bda313b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005bcaf8}] Aliases:map[]}"
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.247745076Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.248527052Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.249681422Z" level=info msg="Ran pod sandbox 637d1f9df851ce42f6970a45468f18a2ec0f265ca72f9c77db45324fea5d1b59 with infra container: default/busybox/POD" id=8eb66a36-98a1-4153-9f43-c969ba3c1487 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.251006571Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8baaba9d-1421-495f-ab14-c5bc34f4ebb4 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.251180726Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8baaba9d-1421-495f-ab14-c5bc34f4ebb4 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.251240817Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8baaba9d-1421-495f-ab14-c5bc34f4ebb4 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.252104022Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=56ef456b-c4da-452a-a94c-a9d5a2ca3e77 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:36:48 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:48.253790171Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 02 13:36:50 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:50.92675661Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=56ef456b-c4da-452a-a94c-a9d5a2ca3e77 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:36:50 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:50.927672518Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=77f7ee1c-4005-4615-a1b8-641e1cb80235 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:50 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:50.929114874Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4fdbc736-6836-45d6-aabf-bd20abf6aeec name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:36:50 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:50.932971725Z" level=info msg="Creating container: default/busybox/busybox" id=4eebb4b0-4fb2-4178-9e4f-21876cef358d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:50 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:50.933119829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:50 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:50.937826463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:50 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:50.938334325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:36:50 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:50.975136006Z" level=info msg="Created container adf447e7128298235aa07875151aacd4ed78cfa96f5c611ffa54af531977cf44: default/busybox/busybox" id=4eebb4b0-4fb2-4178-9e4f-21876cef358d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:36:50 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:50.975916487Z" level=info msg="Starting container: adf447e7128298235aa07875151aacd4ed78cfa96f5c611ffa54af531977cf44" id=3b14c088-f75f-42f9-bc91-ba5f575583ee name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:36:50 default-k8s-diff-port-538419 crio[809]: time="2025-11-02T13:36:50.978184435Z" level=info msg="Started container" PID=1964 containerID=adf447e7128298235aa07875151aacd4ed78cfa96f5c611ffa54af531977cf44 description=default/busybox/busybox id=3b14c088-f75f-42f9-bc91-ba5f575583ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=637d1f9df851ce42f6970a45468f18a2ec0f265ca72f9c77db45324fea5d1b59
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	adf447e712829       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   637d1f9df851c       busybox                                                default
	d3607512d190b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 seconds ago      Running             coredns                   0                   1e6afcfd8f781       coredns-66bc5c9577-4xsxx                               kube-system
	bb8d80688c71f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   7e850bbfc75e3       storage-provisioner                                    kube-system
	06995f0e55a5b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      26 seconds ago      Running             kube-proxy                0                   c435e2f8cfa2c       kube-proxy-nnhqs                                       kube-system
	1348fb135e41e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      26 seconds ago      Running             kindnet-cni               0                   1293c88d13640       kindnet-gc6n2                                          kube-system
	05c8f425af06d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   132f08de0fa35       etcd-default-k8s-diff-port-538419                      kube-system
	c147f92d868de       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   4b95d0745e969       kube-scheduler-default-k8s-diff-port-538419            kube-system
	301a80fd8f9ff       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   81d1c1443a09b       kube-apiserver-default-k8s-diff-port-538419            kube-system
	5d1cd600655d9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   4cc745465bcfb       kube-controller-manager-default-k8s-diff-port-538419   kube-system
	
	
	==> coredns [d3607512d190befefc682123a9fc73d13d595027d934ba01d113bbedd06506ea] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37341 - 45432 "HINFO IN 6637538021578222507.7192598892889977884. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020115317s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-538419
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-538419
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=default-k8s-diff-port-538419
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_36_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:36:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-538419
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:36:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:36:58 +0000   Sun, 02 Nov 2025 13:36:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:36:58 +0000   Sun, 02 Nov 2025 13:36:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:36:58 +0000   Sun, 02 Nov 2025 13:36:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:36:58 +0000   Sun, 02 Nov 2025 13:36:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-538419
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a8e8c9a3-24d1-4403-8143-5254b74d1185
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-4xsxx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-538419                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-gc6n2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-538419             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-538419    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-nnhqs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-538419             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node default-k8s-diff-port-538419 event: Registered Node default-k8s-diff-port-538419 in Controller
	  Normal  NodeReady                15s                kubelet          Node default-k8s-diff-port-538419 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [05c8f425af06dfb9d98c0bddd0a107f66d2f653bc43cd712cf530bd4f702058b] <==
	{"level":"warn","ts":"2025-11-02T13:36:24.292355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.302799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.309211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.316250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.322740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.328984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.335219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.347220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.354105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.360615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.368542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.375217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.382508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.390129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.397181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.404452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.410590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.417833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.425112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.432749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.440354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.453801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.460603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.468253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:24.522643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56716","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:36:59 up  1:19,  0 user,  load average: 4.48, 4.12, 2.66
	Linux default-k8s-diff-port-538419 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1348fb135e41ed02a88750d3917e7f1b05295b3812eb16c072fd0d94ece013c8] <==
	I1102 13:36:33.632127       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:36:33.632442       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 13:36:33.632641       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:36:33.632664       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:36:33.632689       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:36:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:36:33.992592       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:36:33.992655       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:36:33.992671       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:36:33.992827       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:36:34.229586       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:36:34.229617       1 metrics.go:72] Registering metrics
	I1102 13:36:34.229722       1 controller.go:711] "Syncing nftables rules"
	I1102 13:36:43.829920       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:36:43.829980       1 main.go:301] handling current node
	I1102 13:36:53.829904       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:36:53.829961       1 main.go:301] handling current node
	
	
	==> kube-apiserver [301a80fd8f9ffab230e2e6d1d934698bda1b3b71fc37687ebceffd3602586b14] <==
	I1102 13:36:25.031345       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 13:36:25.033107       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 13:36:25.035595       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1102 13:36:25.036177       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1102 13:36:25.045682       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:36:25.078748       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:36:25.240497       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:36:25.938191       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 13:36:25.944857       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 13:36:25.944885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:36:26.409163       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:36:26.442306       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:36:26.538636       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 13:36:26.544176       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1102 13:36:26.545104       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 13:36:26.548612       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:36:26.981527       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:36:27.689428       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:36:27.699345       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 13:36:27.705552       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 13:36:32.683449       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 13:36:32.838366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:36:32.841949       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:36:32.981932       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1102 13:36:58.011441       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:57178: use of closed network connection
	
	
	==> kube-controller-manager [5d1cd600655d98b2dbeff8d5c62e1bf9d482f8a969158318b7aae30080f44277] <==
	I1102 13:36:31.980422       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 13:36:31.980422       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:36:31.980437       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 13:36:31.980546       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 13:36:31.980593       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 13:36:31.980653       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 13:36:31.980679       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 13:36:31.980959       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 13:36:31.981745       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 13:36:31.981821       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 13:36:31.981829       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 13:36:31.981943       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 13:36:31.984748       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 13:36:31.984764       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1102 13:36:31.984825       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1102 13:36:31.984860       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1102 13:36:31.984868       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1102 13:36:31.984872       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1102 13:36:31.988125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:36:31.988166       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:36:31.990262       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 13:36:31.990392       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-538419" podCIDRs=["10.244.0.0/24"]
	I1102 13:36:31.995526       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 13:36:32.001931       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:36:46.939165       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [06995f0e55a5b9e5f6a96fa395d184b79bfd07c2ad06ec479c36d3dd17ac4dfd] <==
	I1102 13:36:33.451595       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:36:33.521495       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:36:33.622629       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:36:33.622662       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 13:36:33.622725       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:36:33.651060       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:36:33.651135       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:36:33.662059       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:36:33.662401       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:36:33.662440       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:36:33.664154       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:36:33.664173       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:36:33.664183       1 config.go:309] "Starting node config controller"
	I1102 13:36:33.664196       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:36:33.664203       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:36:33.664214       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:36:33.664222       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:36:33.664235       1 config.go:200] "Starting service config controller"
	I1102 13:36:33.664276       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:36:33.764355       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:36:33.764379       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:36:33.764404       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c147f92d868de0954b0609c93b61b0a0de608579602be9f91f022037c582c3b4] <==
	E1102 13:36:24.999343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 13:36:24.999434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 13:36:24.999703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 13:36:24.999772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 13:36:25.000015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 13:36:25.000141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 13:36:25.000152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 13:36:25.000180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 13:36:25.000299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 13:36:25.000319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 13:36:25.000410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 13:36:25.000747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 13:36:25.001269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 13:36:25.821368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 13:36:25.841791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 13:36:25.855128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 13:36:25.879311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 13:36:25.884519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 13:36:25.968736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 13:36:26.005110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 13:36:26.010442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 13:36:26.108112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 13:36:26.238924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 13:36:26.243838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1102 13:36:29.295309       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:36:28 default-k8s-diff-port-538419 kubelet[1360]: E1102 13:36:28.539242    1360 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-538419\" already exists" pod="kube-system/etcd-default-k8s-diff-port-538419"
	Nov 02 13:36:28 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:28.583068    1360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-538419" podStartSLOduration=1.583043269 podStartE2EDuration="1.583043269s" podCreationTimestamp="2025-11-02 13:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:28.572613723 +0000 UTC m=+1.140895573" watchObservedRunningTime="2025-11-02 13:36:28.583043269 +0000 UTC m=+1.151325117"
	Nov 02 13:36:28 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:28.592715    1360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-538419" podStartSLOduration=1.59269557 podStartE2EDuration="1.59269557s" podCreationTimestamp="2025-11-02 13:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:28.592693851 +0000 UTC m=+1.160975700" watchObservedRunningTime="2025-11-02 13:36:28.59269557 +0000 UTC m=+1.160977420"
	Nov 02 13:36:28 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:28.592818    1360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-538419" podStartSLOduration=1.592810902 podStartE2EDuration="1.592810902s" podCreationTimestamp="2025-11-02 13:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:28.58352695 +0000 UTC m=+1.151808800" watchObservedRunningTime="2025-11-02 13:36:28.592810902 +0000 UTC m=+1.161092750"
	Nov 02 13:36:28 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:28.604856    1360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-538419" podStartSLOduration=1.604832397 podStartE2EDuration="1.604832397s" podCreationTimestamp="2025-11-02 13:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:28.604226685 +0000 UTC m=+1.172508530" watchObservedRunningTime="2025-11-02 13:36:28.604832397 +0000 UTC m=+1.173114246"
	Nov 02 13:36:32 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:32.029673    1360 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 02 13:36:32 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:32.030353    1360 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 02 13:36:33 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:33.040276    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/51ce1d18-d59b-408c-b247-1f51a7f81bb0-cni-cfg\") pod \"kindnet-gc6n2\" (UID: \"51ce1d18-d59b-408c-b247-1f51a7f81bb0\") " pod="kube-system/kindnet-gc6n2"
	Nov 02 13:36:33 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:33.040318    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8kd\" (UniqueName: \"kubernetes.io/projected/51ce1d18-d59b-408c-b247-1f51a7f81bb0-kube-api-access-fr8kd\") pod \"kindnet-gc6n2\" (UID: \"51ce1d18-d59b-408c-b247-1f51a7f81bb0\") " pod="kube-system/kindnet-gc6n2"
	Nov 02 13:36:33 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:33.040353    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df597ea0-03ac-465d-84e3-2ddca37151d2-xtables-lock\") pod \"kube-proxy-nnhqs\" (UID: \"df597ea0-03ac-465d-84e3-2ddca37151d2\") " pod="kube-system/kube-proxy-nnhqs"
	Nov 02 13:36:33 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:33.040372    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51ce1d18-d59b-408c-b247-1f51a7f81bb0-lib-modules\") pod \"kindnet-gc6n2\" (UID: \"51ce1d18-d59b-408c-b247-1f51a7f81bb0\") " pod="kube-system/kindnet-gc6n2"
	Nov 02 13:36:33 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:33.040399    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/df597ea0-03ac-465d-84e3-2ddca37151d2-kube-proxy\") pod \"kube-proxy-nnhqs\" (UID: \"df597ea0-03ac-465d-84e3-2ddca37151d2\") " pod="kube-system/kube-proxy-nnhqs"
	Nov 02 13:36:33 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:33.040412    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df597ea0-03ac-465d-84e3-2ddca37151d2-lib-modules\") pod \"kube-proxy-nnhqs\" (UID: \"df597ea0-03ac-465d-84e3-2ddca37151d2\") " pod="kube-system/kube-proxy-nnhqs"
	Nov 02 13:36:33 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:33.040501    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51ce1d18-d59b-408c-b247-1f51a7f81bb0-xtables-lock\") pod \"kindnet-gc6n2\" (UID: \"51ce1d18-d59b-408c-b247-1f51a7f81bb0\") " pod="kube-system/kindnet-gc6n2"
	Nov 02 13:36:33 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:33.040559    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8brqw\" (UniqueName: \"kubernetes.io/projected/df597ea0-03ac-465d-84e3-2ddca37151d2-kube-api-access-8brqw\") pod \"kube-proxy-nnhqs\" (UID: \"df597ea0-03ac-465d-84e3-2ddca37151d2\") " pod="kube-system/kube-proxy-nnhqs"
	Nov 02 13:36:33 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:33.555111    1360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nnhqs" podStartSLOduration=1.5550899 podStartE2EDuration="1.5550899s" podCreationTimestamp="2025-11-02 13:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:33.554883791 +0000 UTC m=+6.123165643" watchObservedRunningTime="2025-11-02 13:36:33.5550899 +0000 UTC m=+6.123371752"
	Nov 02 13:36:33 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:33.564636    1360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gc6n2" podStartSLOduration=1.564614347 podStartE2EDuration="1.564614347s" podCreationTimestamp="2025-11-02 13:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:33.564402681 +0000 UTC m=+6.132684530" watchObservedRunningTime="2025-11-02 13:36:33.564614347 +0000 UTC m=+6.132896198"
	Nov 02 13:36:44 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:44.201177    1360 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 02 13:36:44 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:44.324137    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89d1e97a-38e0-47b8-a6c4-4615003a5618-config-volume\") pod \"coredns-66bc5c9577-4xsxx\" (UID: \"89d1e97a-38e0-47b8-a6c4-4615003a5618\") " pod="kube-system/coredns-66bc5c9577-4xsxx"
	Nov 02 13:36:44 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:44.324192    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/743c59db-77d8-44d3-85b6-fa5d0e288d93-tmp\") pod \"storage-provisioner\" (UID: \"743c59db-77d8-44d3-85b6-fa5d0e288d93\") " pod="kube-system/storage-provisioner"
	Nov 02 13:36:44 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:44.324226    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzj96\" (UniqueName: \"kubernetes.io/projected/89d1e97a-38e0-47b8-a6c4-4615003a5618-kube-api-access-dzj96\") pod \"coredns-66bc5c9577-4xsxx\" (UID: \"89d1e97a-38e0-47b8-a6c4-4615003a5618\") " pod="kube-system/coredns-66bc5c9577-4xsxx"
	Nov 02 13:36:44 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:44.324248    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm2hr\" (UniqueName: \"kubernetes.io/projected/743c59db-77d8-44d3-85b6-fa5d0e288d93-kube-api-access-wm2hr\") pod \"storage-provisioner\" (UID: \"743c59db-77d8-44d3-85b6-fa5d0e288d93\") " pod="kube-system/storage-provisioner"
	Nov 02 13:36:45 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:45.585370    1360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.585347631 podStartE2EDuration="12.585347631s" podCreationTimestamp="2025-11-02 13:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:45.585348746 +0000 UTC m=+18.153630596" watchObservedRunningTime="2025-11-02 13:36:45.585347631 +0000 UTC m=+18.153629481"
	Nov 02 13:36:45 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:45.596284    1360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4xsxx" podStartSLOduration=12.596262642 podStartE2EDuration="12.596262642s" podCreationTimestamp="2025-11-02 13:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:36:45.596199333 +0000 UTC m=+18.164481183" watchObservedRunningTime="2025-11-02 13:36:45.596262642 +0000 UTC m=+18.164544492"
	Nov 02 13:36:47 default-k8s-diff-port-538419 kubelet[1360]: I1102 13:36:47.946716    1360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4qdr\" (UniqueName: \"kubernetes.io/projected/4c549d03-4904-4a3b-b321-820059b96c9e-kube-api-access-l4qdr\") pod \"busybox\" (UID: \"4c549d03-4904-4a3b-b321-820059b96c9e\") " pod="default/busybox"
	
	
	==> storage-provisioner [bb8d80688c71f3ba6cdd6f601b0cefd981e2fb684fee992168bbaefb49b57a8b] <==
	I1102 13:36:44.588072       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:36:44.597586       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:36:44.597699       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 13:36:44.600032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:44.606230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:36:44.606484       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:36:44.606660       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538419_2b07608a-c3da-4ac3-8493-f201bf335014!
	I1102 13:36:44.606643       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"030faaca-fc27-4b34-be7e-e6cc7b667e6a", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-538419_2b07608a-c3da-4ac3-8493-f201bf335014 became leader
	W1102 13:36:44.608553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:44.612176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:36:44.707007       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538419_2b07608a-c3da-4ac3-8493-f201bf335014!
	W1102 13:36:46.615500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:46.619832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:48.623434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:48.627672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:50.630728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:50.666279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:52.669436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:52.673556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:54.677779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:54.683858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:56.687357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:56.690994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:58.694728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:36:58.698777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-538419 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.43s)
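
The post-mortem above closes with the harness's non-Running pod query; a minimal sketch for re-running it by hand against this profile (context name taken from the log above; the trailing EndpointSlice listing is an optional extra prompted by the storage-provisioner deprecation warnings in the captured logs, not something the harness itself runs):

	# Re-run the harness's non-Running pod query for this context
	kubectl --context default-k8s-diff-port-538419 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	# Optional: list the EndpointSlice objects that the deprecation
	# warnings above recommend over the deprecated v1 Endpoints
	kubectl --context default-k8s-diff-port-538419 get endpointslices.discovery.k8s.io -A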

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (248.767331ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
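
The exit-11 failure above carries the same MK_ADDON_ENABLE_PAUSED signature seen throughout this report: per its own stderr, the addon-enable path checks paused state by running `sudo runc list -f json` inside the node, and that fails because /run/runc does not exist on this crio node. A minimal repro sketch, assuming the node container is reachable with `docker exec` (container name taken from this test):

	# Reproduce the failing pause check; the command string is copied
	# verbatim from the stderr above
	docker exec newest-cni-066482 sudo runc list -f json
	# Expected on this node: "open /run/runc: no such file or directory"

	# Collect the logs the advice box asks for (flags as given in the box)
	out/minikube-linux-amd64 -p newest-cni-066482 logs --file=logs.txt
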
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-066482
helpers_test.go:243: (dbg) docker inspect newest-cni-066482:

-- stdout --
	[
	    {
	        "Id": "2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3",
	        "Created": "2025-11-02T13:36:51.061365338Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 325110,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:36:51.100263333Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/hosts",
	        "LogPath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3-json.log",
	        "Name": "/newest-cni-066482",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-066482:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-066482",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3",
	                "LowerDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-066482",
	                "Source": "/var/lib/docker/volumes/newest-cni-066482/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-066482",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-066482",
	                "name.minikube.sigs.k8s.io": "newest-cni-066482",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b36c79bf342aca1b5e26d2009428cfd26d2e36bcdde514b5aab86930cce6972f",
	            "SandboxKey": "/var/run/docker/netns/b36c79bf342a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-066482": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:b8:5b:b4:bd:4a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cb7d140d6f6ce408658294ace6543d63c65a2cd98673247fb739d0124deecb8e",
	                    "EndpointID": "ef7ba8d51526357f3623680cbe5a5fa6c7487c769117a17c9bfb4f82e68ceeb0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-066482",
	                        "2ae7f574b714"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
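
For pulling just the fields the post-mortem checks out of the inspect JSON above, a hedged convenience one-liner (assumes `jq` is installed, which this report does not confirm; field paths match the output shown):

	docker inspect newest-cni-066482 | jq '.[0] | {
	  status: .State.Status,
	  ssh_port: .NetworkSettings.Ports["22/tcp"][0].HostPort,
	  api_port: .NetworkSettings.Ports["8443/tcp"][0].HostPort,
	  node_ip: .NetworkSettings.Networks["newest-cni-066482"].IPAddress
	}'
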
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-066482 -n newest-cni-066482
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-066482 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-066482 logs -n 25: (1.129011365s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-123357 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ ssh     │ -p bridge-123357 sudo crio config                                                                                                                                                                                                             │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ delete  │ -p bridge-123357                                                                                                                                                                                                                              │ bridge-123357                │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:35 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:35 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p kubernetes-upgrade-273161                                                                                                                                                                                                                  │ kubernetes-upgrade-273161    │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ delete  │ -p disable-driver-mounts-560932                                                                                                                                                                                                               │ disable-driver-mounts-560932 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-978795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p no-preload-978795 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ image   │ old-k8s-version-054159 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-054159 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-978795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-748183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ start   │ -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ stop    │ -p embed-certs-748183 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538419 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-748183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:37:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:37:02.830877  328990 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:37:02.830994  328990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:02.831005  328990 out.go:374] Setting ErrFile to fd 2...
	I1102 13:37:02.831011  328990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:02.831360  328990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:37:02.831919  328990 out.go:368] Setting JSON to false
	I1102 13:37:02.833473  328990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4775,"bootTime":1762085848,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:37:02.833596  328990 start.go:143] virtualization: kvm guest
	I1102 13:37:02.835561  328990 out.go:179] * [embed-certs-748183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:37:02.836776  328990 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:37:02.836802  328990 notify.go:221] Checking for updates...
	I1102 13:37:02.838998  328990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:37:02.840052  328990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:02.841169  328990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:37:02.842297  328990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:37:02.843406  328990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:37:02.845016  328990 config.go:182] Loaded profile config "embed-certs-748183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:02.845722  328990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:37:02.876921  328990 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:37:02.877026  328990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:02.944400  328990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-02 13:37:02.931787038 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:02.944536  328990 docker.go:319] overlay module found
	I1102 13:37:02.949997  328990 out.go:179] * Using the docker driver based on existing profile
	I1102 13:37:02.951146  328990 start.go:309] selected driver: docker
	I1102 13:37:02.951164  328990 start.go:930] validating driver "docker" against &{Name:embed-certs-748183 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-748183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:02.951270  328990 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:37:02.952028  328990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:03.021012  328990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-02 13:37:03.008878136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:03.021440  328990 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:37:03.021485  328990 cni.go:84] Creating CNI manager for ""
	I1102 13:37:03.021576  328990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:03.021633  328990 start.go:353] cluster config:
	{Name:embed-certs-748183 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-748183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:03.023389  328990 out.go:179] * Starting "embed-certs-748183" primary control-plane node in "embed-certs-748183" cluster
	I1102 13:37:03.024529  328990 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:37:03.025791  328990 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:37:03.026910  328990 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:03.026956  328990 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:37:03.026970  328990 cache.go:59] Caching tarball of preloaded images
	I1102 13:37:03.027052  328990 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:37:03.027089  328990 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:37:03.027112  328990 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:37:03.027247  328990 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/embed-certs-748183/config.json ...
	I1102 13:37:03.057197  328990 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:37:03.057222  328990 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:37:03.057235  328990 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:37:03.057263  328990 start.go:360] acquireMachinesLock for embed-certs-748183: {Name:mk1989e6a2b05b436baa97ffa77e53762b9b8b92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:37:03.057319  328990 start.go:364] duration metric: took 29.188µs to acquireMachinesLock for "embed-certs-748183"
	I1102 13:37:03.057347  328990 start.go:96] Skipping create...Using existing machine configuration
	I1102 13:37:03.057352  328990 fix.go:54] fixHost starting: 
	I1102 13:37:03.057613  328990 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:37:03.079889  328990 fix.go:112] recreateIfNeeded on embed-certs-748183: state=Stopped err=<nil>
	W1102 13:37:03.079925  328990 fix.go:138] unexpected machine state, will restart: <nil>
	I1102 13:37:03.389634  324005 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501009898s
	I1102 13:37:03.393145  324005 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 13:37:03.393438  324005 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1102 13:37:03.393645  324005 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 13:37:03.393770  324005 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 13:37:05.401907  324005 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.008525137s
	I1102 13:37:06.054922  324005 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.661723218s
	W1102 13:37:02.261130  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:04.747693  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:06.748971  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:03.081460  328990 out.go:252] * Restarting existing docker container for "embed-certs-748183" ...
	I1102 13:37:03.081552  328990 cli_runner.go:164] Run: docker start embed-certs-748183
	I1102 13:37:03.385272  328990 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:37:03.408267  328990 kic.go:430] container "embed-certs-748183" state is running.
	I1102 13:37:03.408720  328990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-748183
	I1102 13:37:03.433716  328990 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/embed-certs-748183/config.json ...
	I1102 13:37:03.433997  328990 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:03.434076  328990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:37:03.456292  328990 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:03.456686  328990 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1102 13:37:03.456710  328990 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:03.457398  328990 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42286->127.0.0.1:33125: read: connection reset by peer
	I1102 13:37:06.597835  328990 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-748183
	
	I1102 13:37:06.597867  328990 ubuntu.go:182] provisioning hostname "embed-certs-748183"
	I1102 13:37:06.597944  328990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:37:06.616595  328990 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:06.616918  328990 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1102 13:37:06.616937  328990 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-748183 && echo "embed-certs-748183" | sudo tee /etc/hostname
	I1102 13:37:06.766967  328990 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-748183
	
	I1102 13:37:06.767067  328990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:37:06.785618  328990 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:06.785908  328990 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1102 13:37:06.785927  328990 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-748183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-748183/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-748183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:06.925347  328990 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:06.925376  328990 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:06.925407  328990 ubuntu.go:190] setting up certificates
	I1102 13:37:06.925421  328990 provision.go:84] configureAuth start
	I1102 13:37:06.925477  328990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-748183
	I1102 13:37:06.942938  328990 provision.go:143] copyHostCerts
	I1102 13:37:06.943000  328990 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:06.943019  328990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:06.943097  328990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:06.943215  328990 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:06.943226  328990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:06.943265  328990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:06.943349  328990 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:06.943360  328990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:06.943397  328990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:06.943466  328990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.embed-certs-748183 san=[127.0.0.1 192.168.103.2 embed-certs-748183 localhost minikube]
	I1102 13:37:07.200866  328990 provision.go:177] copyRemoteCerts
	I1102 13:37:07.200935  328990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:07.200980  328990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:37:07.221276  328990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:37:07.327126  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:07.347406  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:37:07.369199  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 13:37:07.389203  328990 provision.go:87] duration metric: took 463.76539ms to configureAuth
	I1102 13:37:07.389237  328990 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:07.389489  328990 config.go:182] Loaded profile config "embed-certs-748183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:07.389642  328990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:37:07.410274  328990 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:07.410537  328990 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1102 13:37:07.410592  328990 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:07.783436  328990 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:07.783463  328990 machine.go:97] duration metric: took 4.349446988s to provisionDockerMachine
	I1102 13:37:07.783476  328990 start.go:293] postStartSetup for "embed-certs-748183" (driver="docker")
	I1102 13:37:07.783489  328990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:07.783547  328990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:07.783632  328990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:37:07.803726  328990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:37:07.894701  324005 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501368551s
	I1102 13:37:07.908529  324005 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 13:37:07.918498  324005 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 13:37:07.930039  324005 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 13:37:07.930305  324005 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-066482 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 13:37:07.939390  324005 kubeadm.go:319] [bootstrap-token] Using token: 7247sw.8wuoqkq8ppeizt08
	I1102 13:37:07.941129  324005 out.go:252]   - Configuring RBAC rules ...
	I1102 13:37:07.941290  324005 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 13:37:07.944125  324005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 13:37:07.950466  324005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 13:37:07.953112  324005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 13:37:07.955631  324005 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 13:37:07.958020  324005 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 13:37:08.300773  324005 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 13:37:08.719052  324005 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 13:37:09.301038  324005 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 13:37:09.302277  324005 kubeadm.go:319] 
	I1102 13:37:09.302352  324005 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 13:37:09.302387  324005 kubeadm.go:319] 
	I1102 13:37:09.302500  324005 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 13:37:09.302512  324005 kubeadm.go:319] 
	I1102 13:37:09.302585  324005 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 13:37:09.302677  324005 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 13:37:09.302758  324005 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 13:37:09.302771  324005 kubeadm.go:319] 
	I1102 13:37:09.302845  324005 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 13:37:09.302853  324005 kubeadm.go:319] 
	I1102 13:37:09.302923  324005 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 13:37:09.302932  324005 kubeadm.go:319] 
	I1102 13:37:09.303009  324005 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 13:37:09.303118  324005 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 13:37:09.303211  324005 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 13:37:09.303238  324005 kubeadm.go:319] 
	I1102 13:37:09.303351  324005 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 13:37:09.303449  324005 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 13:37:09.303459  324005 kubeadm.go:319] 
	I1102 13:37:09.303595  324005 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7247sw.8wuoqkq8ppeizt08 \
	I1102 13:37:09.303738  324005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 \
	I1102 13:37:09.303772  324005 kubeadm.go:319] 	--control-plane 
	I1102 13:37:09.303777  324005 kubeadm.go:319] 
	I1102 13:37:09.303863  324005 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 13:37:09.303870  324005 kubeadm.go:319] 
	I1102 13:37:09.303959  324005 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7247sw.8wuoqkq8ppeizt08 \
	I1102 13:37:09.304110  324005 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7eb059e9b50572a47c407fd9716fa2111250cb604ca6a30e7d7b8f9ea4a6765 
	I1102 13:37:09.306905  324005 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1102 13:37:09.307065  324005 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1102 13:37:09.307104  324005 cni.go:84] Creating CNI manager for ""
	I1102 13:37:09.307116  324005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:09.309089  324005 out.go:179] * Configuring CNI (Container Networking Interface) ...
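As an aside, the --discovery-token-ca-cert-hash value that kubeadm printed above can be recomputed on the control plane with the standard kubeadm recipe (a sketch, not taken from this log):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'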
	I1102 13:37:07.905587  328990 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:07.909607  328990 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:07.909633  328990 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:07.909643  328990 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:07.909689  328990 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:07.909778  328990 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:07.909862  328990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:07.919998  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:07.940190  328990 start.go:296] duration metric: took 156.699515ms for postStartSetup
	I1102 13:37:07.940273  328990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:07.940318  328990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:37:07.960556  328990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:37:08.063094  328990 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:08.068747  328990 fix.go:56] duration metric: took 5.011387743s for fixHost
	I1102 13:37:08.068779  328990 start.go:83] releasing machines lock for "embed-certs-748183", held for 5.011447606s
	I1102 13:37:08.068846  328990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-748183
	I1102 13:37:08.090656  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:08.090725  328990 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:08.090736  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:08.090767  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:08.090790  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:08.090817  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:08.090876  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:08.090959  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:08.091015  328990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:37:08.115625  328990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:37:08.233137  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:08.251342  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:08.268797  328990 ssh_runner.go:195] Run: openssl version
	I1102 13:37:08.275501  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:08.284776  328990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:08.288587  328990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:08.288639  328990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:08.324909  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:08.338816  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:08.349684  328990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:08.353775  328990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:08.353847  328990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:08.391057  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:08.399757  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:08.408830  328990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:08.412876  328990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:08.412939  328990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:08.448508  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:08.457655  328990 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:08.462363  328990 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
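The openssl/ln pairs above implement OpenSSL's hash-directory layout: each CA is symlinked under its subject-hash name so the TLS stack can find it. A condensed sketch of one iteration, using a path from the log:

	# Compute the subject hash, then link the cert under "<hash>.0".
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"  # -> b5213941.0 above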
	I1102 13:37:08.466206  328990 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:08.466290  328990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:08.532389  328990 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:08.542243  328990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:08.582819  328990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:08.587808  328990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:08.587876  328990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:08.596241  328990 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:08.596267  328990 start.go:496] detecting cgroup driver to use...
	I1102 13:37:08.596301  328990 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:08.596359  328990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:08.610804  328990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:08.623391  328990 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:08.623445  328990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:08.638055  328990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:08.651408  328990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:08.744643  328990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:08.831889  328990 docker.go:234] disabling docker service ...
	I1102 13:37:08.831975  328990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:08.845962  328990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:08.858102  328990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:08.941764  328990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:09.029096  328990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:09.041798  328990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:09.056333  328990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:09.056389  328990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:09.065230  328990 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:09.065292  328990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:09.074443  328990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:09.084084  328990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:09.092822  328990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:09.101719  328990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:09.110464  328990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:09.118498  328990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:09.127654  328990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:09.135238  328990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:09.142830  328990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:09.229268  328990 ssh_runner.go:195] Run: sudo systemctl restart crio
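Taken together, the sed edits above leave the drop-in roughly in the following state before the restart (a sketch; the TOML section headers are assumed from CRI-O's documented config layout and do not appear in the log):

	# /etc/crio/crio.conf.d/02-crio.conf (approximate result of the edits above)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]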
	I1102 13:37:09.345773  328990 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:09.345842  328990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:09.350305  328990 start.go:564] Will wait 60s for crictl version
	I1102 13:37:09.350364  328990 ssh_runner.go:195] Run: which crictl
	I1102 13:37:09.354213  328990 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:09.381468  328990 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:09.381553  328990 ssh_runner.go:195] Run: crio --version
	I1102 13:37:09.412061  328990 ssh_runner.go:195] Run: crio --version
	I1102 13:37:09.451037  328990 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:09.452199  328990 cli_runner.go:164] Run: docker network inspect embed-certs-748183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:09.472422  328990 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:09.476856  328990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:09.488555  328990 kubeadm.go:884] updating cluster {Name:embed-certs-748183 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-748183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:09.488724  328990 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:09.488796  328990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:09.527982  328990 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:09.528004  328990 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:09.528046  328990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:09.557526  328990 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:09.557550  328990 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:09.557560  328990 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1102 13:37:09.557715  328990 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-748183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-748183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
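
The kubelet unit logged above is a systemd drop-in override: the first, empty ExecStart= clears whatever command the packaged unit ships, and the second sets exactly the flags minikube wants. A sketch of rendering such a drop-in with text/template (field values are taken from the log; the template and struct are abbreviated and illustrative, not minikube's actual ones):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Values observed in the log above; the struct is illustrative.
    type kubeletUnit struct {
    	Binary, Hostname, NodeIP string
    }

    // Abbreviated drop-in: the empty ExecStart= line resets the
    // packaged unit's command before the override takes effect.
    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.Binary}} --hostname-override={{.Hostname}} --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
    	_ = t.Execute(os.Stdout, kubeletUnit{
    		Binary:   "/var/lib/minikube/binaries/v1.34.1/kubelet",
    		Hostname: "embed-certs-748183",
    		NodeIP:   "192.168.103.2",
    	})
    }
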
	I1102 13:37:09.557805  328990 ssh_runner.go:195] Run: crio config
	I1102 13:37:09.636480  328990 cni.go:84] Creating CNI manager for ""
	I1102 13:37:09.636507  328990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:09.636520  328990 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:37:09.636546  328990 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-748183 NodeName:embed-certs-748183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:09.636800  328990 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-748183"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
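
The generated kubeadm config above stacks four YAML documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. kubeadm parses the first two itself and passes the kubelet and kube-proxy documents on to those components. A small illustrative Go sketch that splits such a file into its documents and reports each kind (pure string handling, no YAML library):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	// Documents in the generated config are separated by a line
    	// containing only "---".
    	docs := strings.Split(string(data), "\n---\n")
    	for i, d := range docs {
    		// The "kind:" line names the component the doc configures.
    		for _, line := range strings.Split(d, "\n") {
    			if strings.HasPrefix(line, "kind:") {
    				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
    			}
    		}
    	}
    }
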
	I1102 13:37:09.636881  328990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:09.647178  328990 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:09.647256  328990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:09.658689  328990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1102 13:37:09.674276  328990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:09.687654  328990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
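
The "scp memory -->" lines denote streaming an in-memory buffer straight to a remote path, with no local file in between. A minimal stand-in using the system ssh client and sudo tee (host and destination are placeholders; minikube's real transport is its internal ssh runner, not this command):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // scpMemory streams buf over ssh and writes it to dst, the effect
    // the log's "scp memory --> dst (N bytes)" lines describe.
    func scpMemory(host, dst string, buf []byte) error {
    	cmd := exec.Command("ssh", host, "sudo tee "+dst+" >/dev/null")
    	cmd.Stdin = bytes.NewReader(buf)
    	return cmd.Run()
    }

    func main() {
    	yaml := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n")
    	// Placeholder host; minikube addresses the node via its own runner.
    	if err := scpMemory("docker@127.0.0.1", "/var/tmp/minikube/kubeadm.yaml.new", yaml); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
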
	I1102 13:37:09.702107  328990 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:09.705881  328990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:09.716498  328990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:09.801373  328990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:09.830780  328990 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/embed-certs-748183 for IP: 192.168.103.2
	I1102 13:37:09.830803  328990 certs.go:195] generating shared ca certs ...
	I1102 13:37:09.830823  328990 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:09.830976  328990 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:09.831030  328990 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:09.831039  328990 certs.go:257] generating profile certs ...
	I1102 13:37:09.831156  328990 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/embed-certs-748183/client.key
	I1102 13:37:09.831234  328990 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/embed-certs-748183/apiserver.key.e77d2db0
	I1102 13:37:09.831291  328990 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/embed-certs-748183/proxy-client.key
	I1102 13:37:09.831429  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:09.831470  328990 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:09.831486  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:09.831518  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:09.831574  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:09.831611  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:09.831663  328990 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:09.832409  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:09.852527  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:09.871316  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:09.891208  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:09.916018  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/embed-certs-748183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1102 13:37:09.934466  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/embed-certs-748183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:37:09.951857  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/embed-certs-748183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:09.968790  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/embed-certs-748183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:37:09.985537  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:10.003081  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:10.020689  328990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:10.039812  328990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:10.052751  328990 ssh_runner.go:195] Run: openssl version
	I1102 13:37:10.058915  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:10.067288  328990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:10.071200  328990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:10.071256  328990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:10.108734  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:10.116913  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:10.125535  328990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:10.129519  328990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:10.129594  328990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:10.164969  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:10.174530  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:10.183725  328990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:10.187714  328990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:10.187773  328990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:10.231933  328990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
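
The block above installs each CA into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL's lookup-by-hash finds trust anchors. A sketch that obtains the hash by shelling out to openssl, then creates the symlink (paths are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkBySubjectHash mirrors the logged pair of steps:
    // `openssl x509 -hash -noout -in cert` followed by
    // `ln -fs cert /etc/ssl/certs/<hash>.0`.
    func linkBySubjectHash(cert string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(cert, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
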
	I1102 13:37:10.241807  328990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:10.246796  328990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:10.283847  328990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:10.319181  328990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:10.364702  328990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:10.410835  328990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:10.462438  328990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
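
`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds, i.e. 24 hours; the runs above are how minikube decides whether control-plane certs need regenerating. The same predicate expressed with Go's crypto/x509 (the cert path is one of those checked in the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, the same predicate as `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
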
	I1102 13:37:10.506073  328990 kubeadm.go:401] StartCluster: {Name:embed-certs-748183 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-748183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:10.506165  328990 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:10.506256  328990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:10.539949  328990 cri.go:89] found id: "92c81ac32663feb2e55e81de4aea9ec83b4adedd0494edb88c83e13189d4ab75"
	I1102 13:37:10.539988  328990 cri.go:89] found id: "915e447acc04f2663378328784e388e7b53096e05c75aacb4faa06eac072d743"
	I1102 13:37:10.539994  328990 cri.go:89] found id: "7ce1beed8bfeca2e3dbe79de858297d5596eb32ea1a78ba33516e86fff957e00"
	I1102 13:37:10.539999  328990 cri.go:89] found id: "4f580374d707565df73a17f079d127e0b80c61ce6670bb6a10a142440e8d5a5a"
	I1102 13:37:10.540011  328990 cri.go:89] found id: ""
	I1102 13:37:10.540085  328990 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:10.553184  328990 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:10Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:10.553273  328990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:10.562530  328990 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:10.562550  328990 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:10.562619  328990 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:10.570265  328990 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:10.571308  328990 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-748183" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:10.571865  328990 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-748183" cluster setting kubeconfig missing "embed-certs-748183" context setting]
	I1102 13:37:10.572734  328990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:10.574612  328990 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:10.583136  328990 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1102 13:37:10.583170  328990 kubeadm.go:602] duration metric: took 20.61422ms to restartPrimaryControlPlane
	I1102 13:37:10.583181  328990 kubeadm.go:403] duration metric: took 77.132057ms to StartCluster
	I1102 13:37:10.583222  328990 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:10.583284  328990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:10.584584  328990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:10.584929  328990 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:10.585011  328990 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:10.585148  328990 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-748183"
	I1102 13:37:10.585163  328990 addons.go:70] Setting dashboard=true in profile "embed-certs-748183"
	I1102 13:37:10.585196  328990 addons.go:239] Setting addon dashboard=true in "embed-certs-748183"
	I1102 13:37:10.585196  328990 config.go:182] Loaded profile config "embed-certs-748183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	W1102 13:37:10.585211  328990 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:10.585179  328990 addons.go:70] Setting default-storageclass=true in profile "embed-certs-748183"
	I1102 13:37:10.585264  328990 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-748183"
	I1102 13:37:10.585170  328990 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-748183"
	W1102 13:37:10.585304  328990 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:10.585343  328990 host.go:66] Checking if "embed-certs-748183" exists ...
	I1102 13:37:10.585247  328990 host.go:66] Checking if "embed-certs-748183" exists ...
	I1102 13:37:10.585653  328990 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:37:10.585865  328990 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:37:10.585918  328990 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:37:10.587537  328990 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:10.588700  328990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:10.612166  328990 addons.go:239] Setting addon default-storageclass=true in "embed-certs-748183"
	W1102 13:37:10.612191  328990 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:10.612218  328990 host.go:66] Checking if "embed-certs-748183" exists ...
	I1102 13:37:10.612689  328990 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:37:10.614699  328990 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:10.614756  328990 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:10.616593  328990 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:10.616610  328990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:10.616659  328990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:37:10.617820  328990 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 13:37:09.310455  324005 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 13:37:09.314874  324005 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 13:37:09.314892  324005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 13:37:09.329853  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1102 13:37:09.563918  324005 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 13:37:09.563979  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:37:09.564014  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-066482 minikube.k8s.io/updated_at=2025_11_02T13_37_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=newest-cni-066482 minikube.k8s.io/primary=true
	I1102 13:37:09.676427  324005 ops.go:34] apiserver oom_adj: -16
	I1102 13:37:09.676580  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:37:10.177398  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:37:10.677434  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:37:11.177309  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:37:11.677537  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1102 13:37:09.248685  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:11.249467  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:10.618792  328990 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:10.618810  328990 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:10.618861  328990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:37:10.650037  328990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:37:10.656844  328990 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:10.656866  328990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:10.656922  328990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:37:10.658059  328990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:37:10.683357  328990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:37:10.748447  328990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:10.765919  328990 node_ready.go:35] waiting up to 6m0s for node "embed-certs-748183" to be "Ready" ...
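
The wait logged above polls the node object until its Ready condition reports True (it succeeds about 1.3s later, further down). A minimal client-go sketch of that wait, assuming the kubeconfig path from the log; the polling loop is illustrative, not minikube's implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition, the check behind
    // the "waiting up to 6m0s for node ... to be Ready" log line.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21808-9416/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "embed-certs-748183", 6*time.Minute))
    }
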
	I1102 13:37:10.773783  328990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:10.778275  328990 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:10.778297  328990 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:10.796285  328990 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:10.796310  328990 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:10.799847  328990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:10.813010  328990 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:10.813037  328990 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:10.832455  328990 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:10.832480  328990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:10.850155  328990 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:10.850183  328990 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:10.865324  328990 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:10.865348  328990 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:10.879294  328990 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:10.879322  328990 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:10.892088  328990 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:10.892114  328990 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:10.904735  328990 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:10.904757  328990 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:10.917005  328990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:12.028477  328990 node_ready.go:49] node "embed-certs-748183" is "Ready"
	I1102 13:37:12.028517  328990 node_ready.go:38] duration metric: took 1.262519381s for node "embed-certs-748183" to be "Ready" ...
	I1102 13:37:12.028535  328990 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:12.028606  328990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:12.586087  328990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.812225886s)
	I1102 13:37:12.586111  328990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.786230592s)
	I1102 13:37:12.586211  328990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.669180058s)
	I1102 13:37:12.586260  328990 api_server.go:72] duration metric: took 2.001290412s to wait for apiserver process to appear ...
	I1102 13:37:12.586358  328990 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:12.586373  328990 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1102 13:37:12.587898  328990 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-748183 addons enable metrics-server
	
	I1102 13:37:12.592304  328990 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:12.592328  328990 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
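
A 500 from /healthz at this stage is expected: the endpoint aggregates named checks and fails as a whole while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still running; minikube simply retries until it gets 200. A bare sketch of the probe itself (endpoint taken from the log; skipping TLS verification is for the sketch only):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// The apiserver serves /healthz over its serving cert; this
    	// sketch skips verification rather than loading the cluster CA.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.103.2:8443/healthz?verbose")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// 200 once every [+] check passes; 500 lists the [-] failures,
    	// as in the log above.
    	fmt.Println(resp.Status)
    	fmt.Print(string(body))
    }
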
	I1102 13:37:12.597598  328990 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1102 13:37:12.598622  328990 addons.go:515] duration metric: took 2.013628185s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:12.177666  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:37:12.676684  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:37:13.177363  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:37:13.677599  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:37:14.176714  324005 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:37:14.248735  324005 kubeadm.go:1114] duration metric: took 4.68482428s to wait for elevateKubeSystemPrivileges
	I1102 13:37:14.248772  324005 kubeadm.go:403] duration metric: took 17.842307218s to StartCluster
	I1102 13:37:14.248794  324005 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:14.248866  324005 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:14.251196  324005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:14.251478  324005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 13:37:14.251483  324005 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:14.251555  324005 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:14.251677  324005 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-066482"
	I1102 13:37:14.251709  324005 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-066482"
	I1102 13:37:14.251717  324005 addons.go:70] Setting default-storageclass=true in profile "newest-cni-066482"
	I1102 13:37:14.251745  324005 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:14.251750  324005 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-066482"
	I1102 13:37:14.251784  324005 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:14.252176  324005 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:14.252326  324005 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:14.253539  324005 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:14.255031  324005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:14.275470  324005 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:14.276628  324005 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:14.276650  324005 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:14.276720  324005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:14.278126  324005 addons.go:239] Setting addon default-storageclass=true in "newest-cni-066482"
	I1102 13:37:14.278163  324005 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:14.278463  324005 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:14.303979  324005 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:14.304020  324005 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:14.304134  324005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:14.304993  324005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:14.331362  324005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:14.357502  324005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
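
The pipeline above edits the CoreDNS Corefile inside the coredns ConfigMap: it inserts a hosts block mapping host.minikube.internal to the gateway IP ahead of the forward plugin (so that name is answered locally, with fallthrough for everything else), then kubectl-replaces the ConfigMap. The same Corefile edit as plain Go string handling (illustrative; the logged command does it with sed):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a CoreDNS hosts block just before the
    // "forward . /etc/resolv.conf" line, as the logged sed expression does.
    func injectHostRecord(corefile, ip, name string) string {
    	hosts := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			b.WriteString(hosts)
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	// Abbreviated Corefile for illustration.
    	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.76.1", "host.minikube.internal"))
    }
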
	I1102 13:37:14.400346  324005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:14.427249  324005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:14.446897  324005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:14.536109  324005 start.go:1013] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1102 13:37:14.537766  324005 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:14.537831  324005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:14.748438  324005 api_server.go:72] duration metric: took 496.91606ms to wait for apiserver process to appear ...
	I1102 13:37:14.748467  324005 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:14.748487  324005 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:14.753663  324005 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 13:37:14.754415  324005 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:14.754441  324005 api_server.go:131] duration metric: took 5.966635ms to wait for apiserver health ...
	I1102 13:37:14.754451  324005 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:14.757082  324005 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:14.757118  324005 system_pods.go:61] "coredns-66bc5c9577-9knvp" [fc8ccf3a-6c3a-4df9-b174-358eea8022b8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:14.757128  324005 system_pods.go:61] "etcd-newest-cni-066482" [b4f125a2-c9c3-4192-bf23-c4ad050bb815] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:14.757134  324005 system_pods.go:61] "kindnet-schdw" [74998f6e-2a7a-40d8-a5c2-a1142f69ee93] Running
	I1102 13:37:14.757139  324005 system_pods.go:61] "kube-apiserver-newest-cni-066482" [e270489b-3057-480f-96dd-329cbcc6f0e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:14.757148  324005 system_pods.go:61] "kube-controller-manager-newest-cni-066482" [9b62b1ef-e72e-41f9-9e3d-c57bfaf0b578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:14.757157  324005 system_pods.go:61] "kube-proxy-fkp22" [85a24a6f-4f8c-4671-92f6-fbe43ab7bb10] Running
	I1102 13:37:14.757165  324005 system_pods.go:61] "kube-scheduler-newest-cni-066482" [5f88460d-ea42-4891-a458-b86eb57b551e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:14.757175  324005 system_pods.go:61] "storage-provisioner" [3bbb95ec-ecf8-4335-b3df-82a08d03b66b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:14.757185  324005 system_pods.go:74] duration metric: took 2.726115ms to wait for pod list to return data ...
	I1102 13:37:14.757197  324005 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:14.757270  324005 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 13:37:14.758453  324005 addons.go:515] duration metric: took 506.909397ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 13:37:14.761343  324005 default_sa.go:45] found service account: "default"
	I1102 13:37:14.761358  324005 default_sa.go:55] duration metric: took 4.156627ms for default service account to be created ...
	I1102 13:37:14.761374  324005 kubeadm.go:587] duration metric: took 509.859113ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:14.761389  324005 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:14.763492  324005 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:14.763511  324005 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:14.763523  324005 node_conditions.go:105] duration metric: took 2.129972ms to run NodePressure ...
	I1102 13:37:14.763533  324005 start.go:242] waiting for startup goroutines ...
	I1102 13:37:15.041337  324005 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-066482" context rescaled to 1 replicas
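
The coredns deployment ships with two replicas by default; on a single-node cluster minikube rescales it to one, which is the "rescaled to 1 replicas" line above. A client-go sketch of such a rescale via the Scale subresource (kubeconfig path as in the log; error handling abbreviated, and this is a sketch of the effect, not minikube's kapi code):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21808-9416/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.TODO()
    	// Read the current scale, set one replica, write it back.
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	scale.Spec.Replicas = 1 // a single-node cluster needs one replica
    	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("coredns rescaled to 1 replica")
    }
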
	I1102 13:37:15.041374  324005 start.go:247] waiting for cluster config update ...
	I1102 13:37:15.041385  324005 start.go:256] writing updated cluster config ...
	I1102 13:37:15.041666  324005 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:15.090551  324005 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:15.092781  324005 out.go:179] * Done! kubectl is now configured to use "newest-cni-066482" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.311602764Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.312532045Z" level=info msg="Running pod sandbox: kube-system/kindnet-schdw/POD" id=d1b245fb-fcfb-4b58-8201-d09195c3a0e0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.312718738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.313436707Z" level=info msg="Ran pod sandbox 8b76dffad93224dd9c797fcbe6745517903226e53b22dcf1d4fa83c5121037d4 with infra container: kube-system/kube-proxy-fkp22/POD" id=7c995540-d152-43c0-9eb9-fd8be5b2531d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.314829723Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2d9a0d1c-6ca6-49f9-9dd3-b948c2f5d69d name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.316842678Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d1b245fb-fcfb-4b58-8201-d09195c3a0e0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.316891691Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=12494625-6e32-49c0-ab42-548cd32c7910 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.319552772Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.320530512Z" level=info msg="Ran pod sandbox d43bb62b684e8469bf6b6a19017691074068f0629d501795ab71ea03f2c944a5 with infra container: kube-system/kindnet-schdw/POD" id=d1b245fb-fcfb-4b58-8201-d09195c3a0e0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.323411547Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=35b05036-049f-4710-b206-fb7c71a3ed46 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.323490799Z" level=info msg="Creating container: kube-system/kube-proxy-fkp22/kube-proxy" id=836ed675-f652-41f5-96f8-a93fb6e4efd0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.3236313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.324758745Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=bf75d16a-3591-4a38-886d-79868ae11bcc name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.329396021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.330075272Z" level=info msg="Creating container: kube-system/kindnet-schdw/kindnet-cni" id=4a0a9c8e-6d66-48a5-b452-d67279f3a1e1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.330181858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.330096567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.334101115Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.334579026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.432910194Z" level=info msg="Created container aefe0536ce18e12d830df488c3246b42eea0f927a245713b995b612963bdd57b: kube-system/kindnet-schdw/kindnet-cni" id=4a0a9c8e-6d66-48a5-b452-d67279f3a1e1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.433912181Z" level=info msg="Starting container: aefe0536ce18e12d830df488c3246b42eea0f927a245713b995b612963bdd57b" id=1df85be1-b649-419d-9c86-8719d3831f8b name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.436628392Z" level=info msg="Started container" PID=1620 containerID=aefe0536ce18e12d830df488c3246b42eea0f927a245713b995b612963bdd57b description=kube-system/kindnet-schdw/kindnet-cni id=1df85be1-b649-419d-9c86-8719d3831f8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=d43bb62b684e8469bf6b6a19017691074068f0629d501795ab71ea03f2c944a5
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.438977326Z" level=info msg="Created container 5e99a803abc0c7fd5bc2938ff06897af86b40b4b9242cd8e0e8579fd97dd294d: kube-system/kube-proxy-fkp22/kube-proxy" id=836ed675-f652-41f5-96f8-a93fb6e4efd0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.439813125Z" level=info msg="Starting container: 5e99a803abc0c7fd5bc2938ff06897af86b40b4b9242cd8e0e8579fd97dd294d" id=2f4439a2-d413-4f55-a0ba-4e79953d42ed name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:14 newest-cni-066482 crio[804]: time="2025-11-02T13:37:14.443275766Z" level=info msg="Started container" PID=1618 containerID=5e99a803abc0c7fd5bc2938ff06897af86b40b4b9242cd8e0e8579fd97dd294d description=kube-system/kube-proxy-fkp22/kube-proxy id=2f4439a2-d413-4f55-a0ba-4e79953d42ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b76dffad93224dd9c797fcbe6745517903226e53b22dcf1d4fa83c5121037d4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	aefe0536ce18e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   d43bb62b684e8       kindnet-schdw                               kube-system
	5e99a803abc0c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   8b76dffad9322       kube-proxy-fkp22                            kube-system
	8a32a38bb9b80       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   5cb21d981e16d       kube-scheduler-newest-cni-066482            kube-system
	ce30e05ac17b8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   704a8fe2f927a       kube-controller-manager-newest-cni-066482   kube-system
	94768fadceff1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   a3224582734e2       kube-apiserver-newest-cni-066482            kube-system
	7c4ae4e9b5413       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   fc8156c6e1228       etcd-newest-cni-066482                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-066482
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-066482
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=newest-cni-066482
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_37_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:37:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-066482
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:37:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:37:08 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:37:08 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:37:08 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 02 Nov 2025 13:37:08 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-066482
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                dba0db4c-1d52-42f8-ac0c-77487a17adc5
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-066482                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9s
	  kube-system                 kindnet-schdw                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-066482             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-066482    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-fkp22                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-066482             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-066482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-066482 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-066482 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s    node-controller  Node newest-cni-066482 event: Registered Node newest-cni-066482 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [7c4ae4e9b54135c2a408d1afdde3db1910dce77892e9fe4a358b012fb1bbb1c9] <==
	{"level":"warn","ts":"2025-11-02T13:37:05.444832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.452863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.459124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.465753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.471840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.477901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.484043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.490906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.499463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.509669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.515413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.521693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.527884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.534197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.540373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.546490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.552363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.558759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.564960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.572833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.579173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.590687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.597021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.603207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:05.652250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38054","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:37:16 up  1:19,  0 user,  load average: 3.99, 4.04, 2.66
	Linux newest-cni-066482 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aefe0536ce18e12d830df488c3246b42eea0f927a245713b995b612963bdd57b] <==
	I1102 13:37:14.696232       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:37:14.696514       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 13:37:14.696811       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:37:14.696837       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:37:14.696861       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:37:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:37:14.900134       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:37:14.900157       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:37:14.900171       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:37:14.993881       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:37:15.301128       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:37:15.301153       1 metrics.go:72] Registering metrics
	I1102 13:37:15.301211       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [94768fadceff1b62c50e1f467fcda3c0873c40f5d0a01680b24a8663558cd38b] <==
	I1102 13:37:06.108062       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 13:37:06.108098       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1102 13:37:06.113331       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:37:06.113893       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 13:37:06.114843       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1102 13:37:06.119940       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:37:06.120226       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:37:06.299283       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:37:07.010645       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 13:37:07.014540       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 13:37:07.014560       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:37:07.464628       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:37:07.499302       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:37:07.613907       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 13:37:07.619745       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1102 13:37:07.621059       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 13:37:07.625296       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:37:08.026367       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:37:08.708081       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:37:08.718167       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 13:37:08.725806       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 13:37:13.582100       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:37:13.588403       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:37:13.979153       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1102 13:37:14.128267       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ce30e05ac17b8d848c885b67a0815391f0758e7a02bbda5745627c00cd8b6852] <==
	I1102 13:37:12.885398       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 13:37:12.889250       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-066482" podCIDRs=["10.42.0.0/24"]
	I1102 13:37:12.915639       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 13:37:12.917961       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 13:37:12.924604       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 13:37:12.924616       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 13:37:12.925066       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 13:37:12.925101       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 13:37:12.925728       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 13:37:12.925778       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 13:37:12.925854       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 13:37:12.925884       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1102 13:37:12.925899       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 13:37:12.925907       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 13:37:12.926745       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 13:37:12.928460       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 13:37:12.928494       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:37:12.929554       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 13:37:12.929849       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 13:37:12.937244       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 13:37:12.939185       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1102 13:37:13.023665       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:37:13.023691       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:37:13.023703       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:37:13.039622       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5e99a803abc0c7fd5bc2938ff06897af86b40b4b9242cd8e0e8579fd97dd294d] <==
	I1102 13:37:14.491160       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:37:14.572209       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:37:14.673039       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:37:14.673090       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1102 13:37:14.673216       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:37:14.693231       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:37:14.693281       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:37:14.700014       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:37:14.700413       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:37:14.700436       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:14.702350       1 config.go:200] "Starting service config controller"
	I1102 13:37:14.702363       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:37:14.702388       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:37:14.702394       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:37:14.702407       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:37:14.702412       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:37:14.703414       1 config.go:309] "Starting node config controller"
	I1102 13:37:14.703423       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:37:14.703430       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:37:14.803219       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:37:14.803248       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 13:37:14.803248       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8a32a38bb9b80e274985444f963de41af569cb92c1431da8143a6af3467ac344] <==
	E1102 13:37:06.053102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 13:37:06.053135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 13:37:06.053159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 13:37:06.053174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 13:37:06.053254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 13:37:06.053254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 13:37:06.053420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 13:37:06.053405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 13:37:06.053445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 13:37:06.053545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 13:37:06.053544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 13:37:06.859102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 13:37:06.867161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 13:37:06.887342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 13:37:06.897414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 13:37:06.901396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 13:37:06.960042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 13:37:07.038638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 13:37:07.085740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 13:37:07.089903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 13:37:07.120111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 13:37:07.212591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 13:37:07.223636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 13:37:07.308211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1102 13:37:07.650632       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: I1102 13:37:09.522919    1363 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: I1102 13:37:09.559421    1363 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-066482"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: I1102 13:37:09.559472    1363 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-066482"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: I1102 13:37:09.559551    1363 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-066482"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: I1102 13:37:09.559686    1363 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-066482"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: E1102 13:37:09.566727    1363 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-066482\" already exists" pod="kube-system/kube-scheduler-newest-cni-066482"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: E1102 13:37:09.569260    1363 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-066482\" already exists" pod="kube-system/kube-controller-manager-newest-cni-066482"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: E1102 13:37:09.569934    1363 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-066482\" already exists" pod="kube-system/etcd-newest-cni-066482"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: E1102 13:37:09.570247    1363 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-066482\" already exists" pod="kube-system/kube-apiserver-newest-cni-066482"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: I1102 13:37:09.604831    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-066482" podStartSLOduration=2.604808364 podStartE2EDuration="2.604808364s" podCreationTimestamp="2025-11-02 13:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:37:09.604508942 +0000 UTC m=+1.147916920" watchObservedRunningTime="2025-11-02 13:37:09.604808364 +0000 UTC m=+1.148216332"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: I1102 13:37:09.626928    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-066482" podStartSLOduration=2.626904488 podStartE2EDuration="2.626904488s" podCreationTimestamp="2025-11-02 13:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:37:09.6175171 +0000 UTC m=+1.160925070" watchObservedRunningTime="2025-11-02 13:37:09.626904488 +0000 UTC m=+1.170312479"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: I1102 13:37:09.627168    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-066482" podStartSLOduration=1.6271536580000001 podStartE2EDuration="1.627153658s" podCreationTimestamp="2025-11-02 13:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:37:09.62675388 +0000 UTC m=+1.170161875" watchObservedRunningTime="2025-11-02 13:37:09.627153658 +0000 UTC m=+1.170561627"
	Nov 02 13:37:09 newest-cni-066482 kubelet[1363]: I1102 13:37:09.647423    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-066482" podStartSLOduration=2.647398177 podStartE2EDuration="2.647398177s" podCreationTimestamp="2025-11-02 13:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:37:09.636468065 +0000 UTC m=+1.179876035" watchObservedRunningTime="2025-11-02 13:37:09.647398177 +0000 UTC m=+1.190806147"
	Nov 02 13:37:12 newest-cni-066482 kubelet[1363]: I1102 13:37:12.967971    1363 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 02 13:37:12 newest-cni-066482 kubelet[1363]: I1102 13:37:12.968668    1363 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 02 13:37:14 newest-cni-066482 kubelet[1363]: I1102 13:37:14.065220    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85a24a6f-4f8c-4671-92f6-fbe43ab7bb10-xtables-lock\") pod \"kube-proxy-fkp22\" (UID: \"85a24a6f-4f8c-4671-92f6-fbe43ab7bb10\") " pod="kube-system/kube-proxy-fkp22"
	Nov 02 13:37:14 newest-cni-066482 kubelet[1363]: I1102 13:37:14.065276    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74998f6e-2a7a-40d8-a5c2-a1142f69ee93-lib-modules\") pod \"kindnet-schdw\" (UID: \"74998f6e-2a7a-40d8-a5c2-a1142f69ee93\") " pod="kube-system/kindnet-schdw"
	Nov 02 13:37:14 newest-cni-066482 kubelet[1363]: I1102 13:37:14.065315    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjz7f\" (UniqueName: \"kubernetes.io/projected/74998f6e-2a7a-40d8-a5c2-a1142f69ee93-kube-api-access-hjz7f\") pod \"kindnet-schdw\" (UID: \"74998f6e-2a7a-40d8-a5c2-a1142f69ee93\") " pod="kube-system/kindnet-schdw"
	Nov 02 13:37:14 newest-cni-066482 kubelet[1363]: I1102 13:37:14.065403    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/85a24a6f-4f8c-4671-92f6-fbe43ab7bb10-kube-proxy\") pod \"kube-proxy-fkp22\" (UID: \"85a24a6f-4f8c-4671-92f6-fbe43ab7bb10\") " pod="kube-system/kube-proxy-fkp22"
	Nov 02 13:37:14 newest-cni-066482 kubelet[1363]: I1102 13:37:14.065479    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/74998f6e-2a7a-40d8-a5c2-a1142f69ee93-cni-cfg\") pod \"kindnet-schdw\" (UID: \"74998f6e-2a7a-40d8-a5c2-a1142f69ee93\") " pod="kube-system/kindnet-schdw"
	Nov 02 13:37:14 newest-cni-066482 kubelet[1363]: I1102 13:37:14.065585    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74998f6e-2a7a-40d8-a5c2-a1142f69ee93-xtables-lock\") pod \"kindnet-schdw\" (UID: \"74998f6e-2a7a-40d8-a5c2-a1142f69ee93\") " pod="kube-system/kindnet-schdw"
	Nov 02 13:37:14 newest-cni-066482 kubelet[1363]: I1102 13:37:14.065667    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85a24a6f-4f8c-4671-92f6-fbe43ab7bb10-lib-modules\") pod \"kube-proxy-fkp22\" (UID: \"85a24a6f-4f8c-4671-92f6-fbe43ab7bb10\") " pod="kube-system/kube-proxy-fkp22"
	Nov 02 13:37:14 newest-cni-066482 kubelet[1363]: I1102 13:37:14.065697    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67fb4\" (UniqueName: \"kubernetes.io/projected/85a24a6f-4f8c-4671-92f6-fbe43ab7bb10-kube-api-access-67fb4\") pod \"kube-proxy-fkp22\" (UID: \"85a24a6f-4f8c-4671-92f6-fbe43ab7bb10\") " pod="kube-system/kube-proxy-fkp22"
	Nov 02 13:37:14 newest-cni-066482 kubelet[1363]: I1102 13:37:14.599684    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fkp22" podStartSLOduration=1.599661157 podStartE2EDuration="1.599661157s" podCreationTimestamp="2025-11-02 13:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:37:14.599275088 +0000 UTC m=+6.142683073" watchObservedRunningTime="2025-11-02 13:37:14.599661157 +0000 UTC m=+6.143069128"
	Nov 02 13:37:14 newest-cni-066482 kubelet[1363]: I1102 13:37:14.599814    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-schdw" podStartSLOduration=1.599805115 podStartE2EDuration="1.599805115s" podCreationTimestamp="2025-11-02 13:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 13:37:14.587911115 +0000 UTC m=+6.131319087" watchObservedRunningTime="2025-11-02 13:37:14.599805115 +0000 UTC m=+6.143213085"
	

-- /stdout --
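
The describe-nodes section above was captured seconds after kubeadm init: the Ready condition is still False ("no CNI configuration file in /etc/cni/net.d/") because the kindnet pod, which writes that config, had only just started. A minimal client-go sketch of the readiness poll a harness would run at this point; the node name comes from the log above, while the kubeconfig path and poll interval are assumptions:

	// nodeready.go: poll a node's Ready condition until it reports True.
	// Sketch only; assumes a kubeconfig at the default location.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "newest-cni-066482", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("Ready=%s reason=%s\n", c.Status, c.Reason)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
	}
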
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-066482 -n newest-cni-066482
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-066482 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-9knvp storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-066482 describe pod coredns-66bc5c9577-9knvp storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-066482 describe pod coredns-66bc5c9577-9knvp storage-provisioner: exit status 1 (71.646803ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-9knvp" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-066482 describe pod coredns-66bc5c9577-9knvp storage-provisioner: exit status 1
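
The post-mortem first lists pods with the field selector status.phase!=Running, then describes the matches; the describe step fails with NotFound here, presumably because both pods were deleted or replaced between the two commands. The same listing query via client-go, as a sketch (kubeconfig location assumed):

	// nonrunning.go: list pods not in phase Running across all namespaces,
	// mirroring the post-mortem's --field-selector query. Sketch only;
	// assumes a kubeconfig at the default location.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same selector the harness passes to kubectl.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
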
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.29s)

x
+
TestStartStop/group/newest-cni/serial/Pause (6.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-066482 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-066482 --alsologtostderr -v=1: exit status 80 (2.317002858s)

-- stdout --
	* Pausing node newest-cni-066482 ... 
	
	

-- /stdout --
** stderr ** 
	I1102 13:37:32.937200  337538 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:37:32.937450  337538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:32.937459  337538 out.go:374] Setting ErrFile to fd 2...
	I1102 13:37:32.937464  337538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:32.937677  337538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:37:32.937909  337538 out.go:368] Setting JSON to false
	I1102 13:37:32.937940  337538 mustload.go:66] Loading cluster: newest-cni-066482
	I1102 13:37:32.939091  337538 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:32.939637  337538 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:32.957316  337538 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:32.957648  337538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:33.019526  337538 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-02 13:37:33.009395497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:33.020228  337538 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-066482 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 13:37:33.022096  337538 out.go:179] * Pausing node newest-cni-066482 ... 
	I1102 13:37:33.023235  337538 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:33.023480  337538 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:33.023519  337538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:33.041146  337538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:33.141062  337538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:33.153296  337538 pause.go:52] kubelet running: true
	I1102 13:37:33.153387  337538 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:37:33.298584  337538 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:37:33.298694  337538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:37:33.372791  337538 cri.go:89] found id: "d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7"
	I1102 13:37:33.372819  337538 cri.go:89] found id: "505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323"
	I1102 13:37:33.372824  337538 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:33.372828  337538 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:33.372832  337538 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:33.372845  337538 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:33.372848  337538 cri.go:89] found id: ""
	I1102 13:37:33.372895  337538 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:37:33.384756  337538 retry.go:31] will retry after 336.894326ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:33Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:33.722313  337538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:33.737345  337538 pause.go:52] kubelet running: false
	I1102 13:37:33.737400  337538 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:37:33.868881  337538 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:37:33.868970  337538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:37:33.940415  337538 cri.go:89] found id: "d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7"
	I1102 13:37:33.940441  337538 cri.go:89] found id: "505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323"
	I1102 13:37:33.940447  337538 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:33.940451  337538 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:33.940455  337538 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:33.940460  337538 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:33.940463  337538 cri.go:89] found id: ""
	I1102 13:37:33.940510  337538 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:37:33.958507  337538 retry.go:31] will retry after 340.409773ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:33Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:34.300217  337538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:34.319061  337538 pause.go:52] kubelet running: false
	I1102 13:37:34.319126  337538 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:37:34.476793  337538 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:37:34.476876  337538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:37:34.559912  337538 cri.go:89] found id: "d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7"
	I1102 13:37:34.559936  337538 cri.go:89] found id: "505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323"
	I1102 13:37:34.559942  337538 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:34.559947  337538 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:34.559951  337538 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:34.559955  337538 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:34.559959  337538 cri.go:89] found id: ""
	I1102 13:37:34.560005  337538 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:37:34.575741  337538 retry.go:31] will retry after 336.554814ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:34Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:34.913114  337538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:34.929621  337538 pause.go:52] kubelet running: false
	I1102 13:37:34.929685  337538 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:37:35.089522  337538 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:37:35.089632  337538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:37:35.170913  337538 cri.go:89] found id: "d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7"
	I1102 13:37:35.170940  337538 cri.go:89] found id: "505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323"
	I1102 13:37:35.170947  337538 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:35.170953  337538 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:35.170957  337538 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:35.170962  337538 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:35.170966  337538 cri.go:89] found id: ""
	I1102 13:37:35.171016  337538 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:37:35.187631  337538 out.go:203] 
	W1102 13:37:35.188753  337538 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:37:35.188774  337538 out.go:285] * 
	W1102 13:37:35.194366  337538 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:37:35.195464  337538 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-066482 --alsologtostderr -v=1 failed: exit status 80
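Note: the GUEST_PAUSE failure above is mechanical. `minikube pause` first lists the cluster's containers through crictl (which succeeds), then asks `sudo runc list -f json` for the running set; that call fails because runc's state directory `/run/runc` does not exist, and the `retry.go:31` lines show the call being retried with a randomized backoff before the command gives up. A minimal Go sketch of that retry shape, assuming a hypothetical `runWithRetry` helper rather than minikube's actual internals:

	// Hypothetical retry-with-backoff helper mirroring the shape of the
	// retry.go lines in the log: re-run a flaky command a few times,
	// sleep a randomized interval between attempts, and surface the
	// last error once the attempts are exhausted.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func runWithRetry(attempts int, name string, args ...string) ([]byte, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command(name, args...).CombinedOutput()
			if err == nil {
				return out, nil
			}
			lastErr = fmt.Errorf("%s: %w: %s", name, err, out)
			delay := time.Duration(200+rand.Intn(300)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, lastErr)
			time.Sleep(delay)
		}
		return nil, lastErr
	}

	func main() {
		// The call that keeps failing in the log above.
		if _, err := runWithRetry(3, "sudo", "runc", "list", "-f", "json"); err != nil {
			fmt.Println("giving up:", err)
		}
	}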
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-066482
helpers_test.go:243: (dbg) docker inspect newest-cni-066482:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3",
	        "Created": "2025-11-02T13:36:51.061365338Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 334243,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:37:21.032155919Z",
	            "FinishedAt": "2025-11-02T13:37:19.568401188Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/hosts",
	        "LogPath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3-json.log",
	        "Name": "/newest-cni-066482",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-066482:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-066482",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3",
	                "LowerDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-066482",
	                "Source": "/var/lib/docker/volumes/newest-cni-066482/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-066482",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-066482",
	                "name.minikube.sigs.k8s.io": "newest-cni-066482",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "321ee91cd3e7036da545c4bd151aa6f4afee74b740669f155c49e5760c0f1a9a",
	            "SandboxKey": "/var/run/docker/netns/321ee91cd3e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-066482": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:6b:6a:ef:d5:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cb7d140d6f6ce408658294ace6543d63c65a2cd98673247fb739d0124deecb8e",
	                    "EndpointID": "85aa3aff2248677a94b72dec338a6b6bce495eb46f07c926cd4e1ca605eb3912",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-066482",
	                        "2ae7f574b714"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
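Note: two fields in the inspect output above are consistent with the `/run/runc` error. `State.FinishedAt` (13:37:19) precedes `State.StartedAt` (13:37:21), so the container was stopped and restarted seconds before the pause attempt, and `HostConfig.Tmpfs` mounts `/run` as tmpfs, so runc's state directory under `/run` is empty after every restart until the runtime recreates it. A minimal sketch (assuming `docker` on PATH and this profile name) that extracts just those two fields:

	// Pull the restart timestamps and tmpfs mounts out of `docker inspect`.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		State struct {
			StartedAt  string
			FinishedAt string
		}
		HostConfig struct {
			Tmpfs map[string]string
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "newest-cni-066482").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		c := containers[0]
		fmt.Println("FinishedAt:", c.State.FinishedAt) // last stop
		fmt.Println("StartedAt: ", c.State.StartedAt)  // restart two seconds later
		fmt.Println("Tmpfs:     ", c.HostConfig.Tmpfs) // "/run" and "/tmp" are tmpfs
	}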
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-066482 -n newest-cni-066482
E1102 13:37:35.353334   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-066482 -n newest-cni-066482: exit status 2 (378.693576ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
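Note: exit status 2 here is an encoding, not a second failure. The `minikube status` help text describes packing the state of the VM, the cluster, and Kubernetes into the exit code from the lowest bit up, so status 2 alongside a `Running` host means only the cluster bit is set, which matches the kubelet having just been disabled by the pause path. A hypothetical decoder for that convention (bit names assumed from the help text, not taken from minikube's source):

	// Decode a minikube status exit code under the assumed bit layout:
	// bit 0 = VM/host, bit 1 = cluster (kubelet), bit 2 = Kubernetes (apiserver).
	package main

	import "fmt"

	func main() {
		code := 2 // the exit status observed above
		for _, c := range []struct {
			bit  int
			name string
		}{{1, "host"}, {2, "cluster"}, {4, "kubernetes"}} {
			state := "ok"
			if code&c.bit != 0 {
				state = "not ok"
			}
			fmt.Printf("%s: %s\n", c.name, state)
		}
	}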
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-066482 logs -n 25
E1102 13:37:36.635030   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-066482 logs -n 25: (1.157483184s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-560932                                                                                                                                                                                                               │ disable-driver-mounts-560932 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-978795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p no-preload-978795 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ image   │ old-k8s-version-054159 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-054159 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-978795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-748183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ start   │ -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ stop    │ -p embed-certs-748183 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538419 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p embed-certs-748183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ stop    │ -p newest-cni-066482 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-066482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ newest-cni-066482 image list --format=json                                                                                                                                                                                                    │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p newest-cni-066482 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:37:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:37:20.524373  333962 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:37:20.524647  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524664  333962 out.go:374] Setting ErrFile to fd 2...
	I1102 13:37:20.524670  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524846  333962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:37:20.525403  333962 out.go:368] Setting JSON to false
	I1102 13:37:20.526966  333962 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4793,"bootTime":1762085848,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:37:20.527085  333962 start.go:143] virtualization: kvm guest
	I1102 13:37:20.531180  333962 out.go:179] * [newest-cni-066482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:37:20.533535  333962 notify.go:221] Checking for updates...
	I1102 13:37:20.533705  333962 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:37:20.535165  333962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:37:20.536733  333962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:20.538369  333962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:37:20.539773  333962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:37:20.541014  333962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:37:20.543949  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:20.544901  333962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:37:20.580929  333962 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:37:20.581269  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.677940  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.664880977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.678092  333962 docker.go:319] overlay module found
	I1102 13:37:20.686090  333962 out.go:179] * Using the docker driver based on existing profile
	I1102 13:37:20.689767  333962 start.go:309] selected driver: docker
	I1102 13:37:20.689788  333962 start.go:930] validating driver "docker" against &{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.689907  333962 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:37:20.690830  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.765132  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.75342287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.765679  333962 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:20.765731  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:20.765799  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:20.765881  333962 start.go:353] cluster config:
	{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.825212  333962 out.go:179] * Starting "newest-cni-066482" primary control-plane node in "newest-cni-066482" cluster
	I1102 13:37:20.829240  333962 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:37:20.869092  333962 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:37:20.895924  333962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:37:20.895925  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:20.896230  333962 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:37:20.896249  333962 cache.go:59] Caching tarball of preloaded images
	I1102 13:37:20.896370  333962 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:37:20.896389  333962 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:37:20.896531  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:20.923310  333962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:37:20.923336  333962 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:37:20.923354  333962 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:37:20.923397  333962 start.go:360] acquireMachinesLock for newest-cni-066482: {Name:mk25ceca9700045fc79c727ac5793f50b1f35f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:37:20.923467  333962 start.go:364] duration metric: took 45.165µs to acquireMachinesLock for "newest-cni-066482"
	I1102 13:37:20.923495  333962 start.go:96] Skipping create...Using existing machine configuration
	I1102 13:37:20.923507  333962 fix.go:54] fixHost starting: 
	I1102 13:37:20.923821  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:20.947956  333962 fix.go:112] recreateIfNeeded on newest-cni-066482: state=Stopped err=<nil>
	W1102 13:37:20.947991  333962 fix.go:138] unexpected machine state, will restart: <nil>
	W1102 13:37:17.749910  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:19.754111  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:18.133437  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:20.135974  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:22.633523  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:19.800458  333276 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-538419" ...
	I1102 13:37:19.800582  333276 cli_runner.go:164] Run: docker start default-k8s-diff-port-538419
	I1102 13:37:20.258040  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:20.285518  333276 kic.go:430] container "default-k8s-diff-port-538419" state is running.
	I1102 13:37:20.285975  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:20.314790  333276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:37:20.315668  333276 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:20.316243  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:20.344162  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:20.344635  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:20.344656  333276 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:20.345938  333276 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42554->127.0.0.1:33130: read: connection reset by peer
	I1102 13:37:23.485888  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.485911  333276 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-538419"
	I1102 13:37:23.485968  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.504539  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.504787  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.504808  333276 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-538419 && echo "default-k8s-diff-port-538419" | sudo tee /etc/hostname
	I1102 13:37:23.654299  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.654392  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.673075  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.673329  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.673355  333276 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-538419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-538419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-538419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:23.814290  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:23.814321  333276 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:23.814341  333276 ubuntu.go:190] setting up certificates
	I1102 13:37:23.814351  333276 provision.go:84] configureAuth start
	I1102 13:37:23.814396  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:23.831955  333276 provision.go:143] copyHostCerts
	I1102 13:37:23.832026  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:23.832046  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:23.832132  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:23.832261  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:23.832273  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:23.832318  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:23.832420  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:23.832433  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:23.832471  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:23.832546  333276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-538419 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-538419 localhost minikube]
	I1102 13:37:24.219472  333276 provision.go:177] copyRemoteCerts
	I1102 13:37:24.219536  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.219587  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.237848  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.340891  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 13:37:24.358910  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:24.376167  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:24.393830  333276 provision.go:87] duration metric: took 579.46643ms to configureAuth
	I1102 13:37:24.393865  333276 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:24.394064  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:24.394157  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.412877  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.413122  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:24.413143  333276 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:20.978818  333962 out.go:252] * Restarting existing docker container for "newest-cni-066482" ...
	I1102 13:37:20.978914  333962 cli_runner.go:164] Run: docker start newest-cni-066482
	I1102 13:37:21.270167  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:21.288682  333962 kic.go:430] container "newest-cni-066482" state is running.
	I1102 13:37:21.289009  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:21.309331  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:21.309611  333962 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:21.309709  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:21.330053  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:21.330413  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:21.330432  333962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:21.331174  333962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55362->127.0.0.1:33135: read: connection reset by peer
	I1102 13:37:24.473386  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.473415  333962 ubuntu.go:182] provisioning hostname "newest-cni-066482"
	I1102 13:37:24.473479  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.491931  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.492137  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.492150  333962 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-066482 && echo "newest-cni-066482" | sudo tee /etc/hostname
	I1102 13:37:24.643677  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.643803  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.663238  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.663468  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.663495  333962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-066482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-066482/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-066482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:24.810077  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:24.810117  333962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:24.810141  333962 ubuntu.go:190] setting up certificates
	I1102 13:37:24.810156  333962 provision.go:84] configureAuth start
	I1102 13:37:24.810212  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:24.827792  333962 provision.go:143] copyHostCerts
	I1102 13:37:24.827858  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:24.827875  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:24.827953  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:24.828150  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:24.828164  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:24.828215  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:24.828305  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:24.828317  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:24.828355  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:24.828426  333962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.newest-cni-066482 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-066482]
	I1102 13:37:24.927237  333962 provision.go:177] copyRemoteCerts
	I1102 13:37:24.927289  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.927321  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.944584  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.045425  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:25.062863  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:25.080629  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:37:25.097296  333962 provision.go:87] duration metric: took 287.125327ms to configureAuth
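
configureAuth (provision.go) regenerates the machine's server certificate, signed by the minikube CA, with the SAN list shown above. A rough openssl equivalent for reproducing such a cert by hand (a sketch only: file names, key size, and validity period are illustrative, not minikube's actual values):

    # Sketch: issue a CA-signed server cert carrying the SANs from the log.
    cat > san.cnf <<'EOF'
    [req]
    distinguished_name = dn
    [dn]
    [ext]
    subjectAltName = IP:127.0.0.1, IP:192.168.76.2, DNS:localhost, DNS:minikube, DNS:newest-cni-066482
    EOF
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
        -subj "/O=jenkins.newest-cni-066482" -out server.csr -config san.cnf
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -out server.pem -days 365 -extfile san.cnf -extensions ext
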
	I1102 13:37:25.097332  333962 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:25.097535  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:25.097668  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.115731  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:25.115937  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:25.115955  333962 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:25.401017  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:25.401045  333962 machine.go:97] duration metric: took 4.091415666s to provisionDockerMachine
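
The restart command above also echoes the freshly written file back over the SSH channel, which is why CRIO_MINIKUBE_OPTIONS appears in the command output. The same state can be checked by hand (sketch):

    cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio           # expect: active
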
	I1102 13:37:25.401058  333962 start.go:293] postStartSetup for "newest-cni-066482" (driver="docker")
	I1102 13:37:25.401071  333962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:25.401154  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:25.401203  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.420252  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.519659  333962 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:25.522994  333962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:25.523015  333962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:25.523025  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:25.523068  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:25.523146  333962 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:25.523246  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.712619  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:24.712652  333276 machine.go:97] duration metric: took 4.396840284s to provisionDockerMachine
	I1102 13:37:24.712667  333276 start.go:293] postStartSetup for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:37:24.712682  333276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:24.712766  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:24.712819  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.733777  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.836037  333276 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:24.839702  333276 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:24.839733  333276 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:24.839744  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:24.839789  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:24.839894  333276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:24.840014  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.847534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:24.864718  333276 start.go:296] duration metric: took 152.035287ms for postStartSetup
	I1102 13:37:24.864791  333276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:24.864826  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.884885  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.983028  333276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:24.987641  333276 fix.go:56] duration metric: took 5.212515962s for fixHost
	I1102 13:37:24.987669  333276 start.go:83] releasing machines lock for "default-k8s-diff-port-538419", held for 5.212566618s
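
postStartSetup closes with two quick disk probes on /var: percent used (column 5 of df -h) and gigabytes available (column 4 of df -BG). Standalone form (sketch):

    df -h  /var | awk 'NR==2{print $5}'   # e.g. "42%" (space used)
    df -BG /var | awk 'NR==2{print $4}'   # e.g. "18G" (space available)
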
	I1102 13:37:24.987736  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:25.007034  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.007083  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.007090  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.007125  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.007153  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.007176  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.007213  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.007274  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.007319  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:25.024428  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:25.135885  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.153535  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.171518  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.177840  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.186217  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190875  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190931  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.225348  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.233857  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.242147  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245844  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245889  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.282977  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:25.290988  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.299515  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303360  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303415  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.338843  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.348256  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:25.352326  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
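
The pattern in this cert block is the OpenSSL hashed-directory convention: a CA in /etc/ssl/certs is located through a symlink named <subject-hash>.0, so each installed PEM is hashed with openssl x509 -hash and then linked under that name. Condensed into one sequence (a sketch; paths match the log, and the hash value is whatever openssl prints):

    # Install one CA cert the way the log does: hash, symlink, refresh trust store.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")    # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # c_rehash-style lookup link
    # Refresh with whichever tool the distro ships (Debian vs. RHEL family).
    command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true
    command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true
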
	I1102 13:37:25.357122  333276 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:25.357227  333276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:25.361283  333276 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:25.422770  333276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:25.458920  333276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:25.463750  333276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:25.463815  333276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:25.471852  333276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:25.471874  333276 start.go:496] detecting cgroup driver to use...
	I1102 13:37:25.471904  333276 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:25.471948  333276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:25.485878  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:25.497990  333276 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:25.498045  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:25.512402  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:25.525187  333276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:25.608539  333276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:25.688830  333276 docker.go:234] disabling docker service ...
	I1102 13:37:25.688921  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:25.705783  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:25.723506  333276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:25.813168  333276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:25.898289  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
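
Switching the node to CRI-O first takes the competing runtimes out of the picture; the log does this with the usual stop, disable, mask sequence for both the socket- and service-activated units, then confirms with is-active. Condensed (sketch):

    # Stop, disable and mask cri-dockerd and docker so only CRI-O serves the CRI.
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"
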
	I1102 13:37:25.910519  333276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:25.924524  333276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:25.924604  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.933372  333276 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:25.933426  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.942218  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.951107  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.959830  333276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:25.967946  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.977032  333276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.986463  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.995429  333276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.003006  333276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.010445  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.094219  333276 ssh_runner.go:195] Run: sudo systemctl restart crio
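
All of the CRI-O tuning above is done with in-place sed edits to a single drop-in file. Collected into one script (a sketch; the file, keys, and values are exactly those from the log, and GNU sed is assumed):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image and switch the cgroup manager to systemd.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    # Recreate conmon_cgroup right after cgroup_manager.
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Ensure a default_sysctls list exists, then allow unprivileged low ports.
    sudo grep -q '^ *default_sysctls' "$CONF" || \
        sudo sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio
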
	I1102 13:37:26.215173  333276 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.215239  333276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.219123  333276 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.219176  333276 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.222728  333276 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.250907  333276 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.250993  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.285974  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.314527  333276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:25.531179  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.548059  333962 start.go:296] duration metric: took 146.985428ms for postStartSetup
	I1102 13:37:25.548168  333962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:25.548227  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.572631  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.670554  333962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:25.674984  333962 fix.go:56] duration metric: took 4.751471621s for fixHost
	I1102 13:37:25.675009  333962 start.go:83] releasing machines lock for "newest-cni-066482", held for 4.751529653s
	I1102 13:37:25.675073  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:25.693462  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.693510  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.693517  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.693544  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.693612  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.693646  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.693704  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.693780  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.693820  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.715629  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.832398  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.854465  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.871731  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.877714  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.886048  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889747  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889800  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.924157  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.932269  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.940725  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944474  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944520  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.982544  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.991404  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.999821  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003838  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003886  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.045614  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:26.054860  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:26.058745  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:37:26.062392  333962 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:26.062503  333962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:26.066112  333962 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:26.127272  333962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:26.165639  333962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:26.170693  333962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:26.170747  333962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:26.179292  333962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:26.179317  333962 start.go:496] detecting cgroup driver to use...
	I1102 13:37:26.179346  333962 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:26.179401  333962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:26.194965  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:26.209348  333962 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:26.209406  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:26.224797  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:26.237179  333962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:26.329871  333962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:26.424322  333962 docker.go:234] disabling docker service ...
	I1102 13:37:26.424387  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:26.439911  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:26.453248  333962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:26.542141  333962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:26.630964  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:26.643532  333962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:26.658482  333962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:26.658590  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.668170  333962 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:26.668240  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.678403  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.687532  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.697557  333962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:26.707346  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.718538  333962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.729625  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.743583  333962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.753321  333962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.761369  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.839464  333962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.938004  333962 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.938073  333962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.942145  333962 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.942204  333962 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.946060  333962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.972282  333962 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.972365  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.002057  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.032337  333962 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:27.033686  333962 cli_runner.go:164] Run: docker network inspect newest-cni-066482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:27.051527  333962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:27.055606  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
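
The host.minikube.internal mapping is refreshed with a filter-and-append rewrite: strip any previous line for the name, append the new mapping, and cp the temp file over /etc/hosts in a single step. Generalized from the command above (sketch):

    # Replace (or add) one /etc/hosts mapping without leaving duplicates.
    ip=192.168.76.1; name=host.minikube.internal
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
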
	I1102 13:37:27.067494  333962 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1102 13:37:22.249113  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:24.748949  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:26.749600  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:26.315635  333276 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:26.333971  333276 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:26.337905  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.348667  333276 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:26.348772  333276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:26.348822  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.387710  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.387730  333276 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:26.387777  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.413505  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.413528  333276 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:26.413538  333276 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1102 13:37:26.413643  333276 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
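
In the drop-in above, the empty ExecStart= line is the standard systemd idiom: it clears the ExecStart inherited from the packaged unit before redefining it, since a plain (non-oneshot) service may declare only one ExecStart. Written out as the file the log later copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (a sketch reconstructed from the dump above):

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

    [Install]
    EOF
    sudo systemctl daemon-reload
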
	I1102 13:37:26.413707  333276 ssh_runner.go:195] Run: crio config
	I1102 13:37:26.464812  333276 cni.go:84] Creating CNI manager for ""
	I1102 13:37:26.464835  333276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:26.464845  333276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:37:26.464866  333276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538419 NodeName:default-k8s-diff-port-538419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:26.464984  333276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
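
The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what the scp a few lines below writes to /var/tmp/minikube/kubeadm.yaml.new. A file of this shape can be sanity-checked offline (a sketch; the kubeadm config validate subcommand is assumed to be available in the v1.34 binary used here):

    # Validate the generated multi-document config without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
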
	
	I1102 13:37:26.465035  333276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:26.474038  333276 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:26.474098  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:26.483977  333276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 13:37:26.499882  333276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:26.512917  333276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1102 13:37:26.525720  333276 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:26.529537  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.539879  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.630475  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:26.654165  333276 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419 for IP: 192.168.85.2
	I1102 13:37:26.654186  333276 certs.go:195] generating shared ca certs ...
	I1102 13:37:26.654206  333276 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:26.654367  333276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:26.654420  333276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:26.654431  333276 certs.go:257] generating profile certs ...
	I1102 13:37:26.654503  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key
	I1102 13:37:26.654557  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d
	I1102 13:37:26.654639  333276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key
	I1102 13:37:26.654737  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:26.654764  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:26.654773  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:26.654795  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:26.654816  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:26.654836  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:26.654873  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:26.655534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:26.675380  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:26.694442  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:26.715145  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:26.740328  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 13:37:26.762384  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:37:26.779554  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:26.801750  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:37:26.818827  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:26.836709  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:26.855014  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:26.874155  333276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:26.887334  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:26.893721  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:26.902112  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905794  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905842  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.942658  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:26.950976  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:26.959359  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963079  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963124  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.004948  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.013797  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.023152  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027166  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027232  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.065532  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:27.074165  333276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.078238  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.117094  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:27.159482  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:27.208066  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:27.263395  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:27.326908  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
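
Each check above uses openssl's -checkend flag, which makes the command exit non-zero when the certificate expires within the given number of seconds (86400 s = 24 h); that is how the restart path decides whether the existing control-plane certs can be reused. Standalone form (sketch):

    # Exit 0 while the cert stays valid for at least another 24 hours.
    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
        echo "cert valid for >= 24h, safe to reuse"
    else
        echo "cert expires within 24h, regenerate it"
    fi
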
	I1102 13:37:27.369723  333276 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:27.369813  333276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:27.369901  333276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:27.406986  333276 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:37:27.407007  333276 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:37:27.407013  333276 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:37:27.407018  333276 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:37:27.407022  333276 cri.go:89] found id: ""
	I1102 13:37:27.407085  333276 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:27.422941  333276 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:27Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:27.423012  333276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:27.432001  333276 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:27.432029  333276 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:27.432125  333276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:27.441699  333276 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:27.442817  333276 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-538419" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.443582  333276 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-538419" cluster setting kubeconfig missing "default-k8s-diff-port-538419" context setting]
	I1102 13:37:27.444782  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.446868  333276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:27.456310  333276 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1102 13:37:27.456342  333276 kubeadm.go:602] duration metric: took 24.307485ms to restartPrimaryControlPlane
	I1102 13:37:27.456351  333276 kubeadm.go:403] duration metric: took 86.638872ms to StartCluster
	I1102 13:37:27.456373  333276 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.456425  333276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.458467  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.458734  333276 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:27.458787  333276 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:27.458879  333276 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458899  333276 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.458911  333276 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:27.458908  333276 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458932  333276 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-538419"
	I1102 13:37:27.458925  333276 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458942  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	W1102 13:37:27.458947  333276 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:27.458958  333276 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-538419"
	I1102 13:37:27.458977  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.459272  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459713  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:27.463479  333276 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:27.466531  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.489401  333276 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:27.489460  333276 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.490695  333276 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.490742  333276 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:27.490779  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.490905  333276 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.490993  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:27.491127  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.491342  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.492226  333276 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1102 13:37:24.634329  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:27.133336  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:27.068545  333962 kubeadm.go:884] updating cluster {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:27.068680  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:27.068745  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.101393  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.101420  333962 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:27.101479  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.128092  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.128116  333962 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:27.128126  333962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 13:37:27.128251  333962 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-066482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
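	(Editor's note: the doubled ExecStart above is the standard systemd drop-in idiom: an empty ExecStart= first clears the command inherited from kubelet.service, and the second line installs the override. The rendered file is plausibly the 367-byte 10-kubeadm.conf scp'd a few lines below. A minimal sketch of writing such a drop-in, with paths and flags taken from this log rather than from minikube's own code:

// Sketch: install a systemd drop-in that replaces kubelet.service's ExecStart.
package main

import (
	"log"
	"os"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-066482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

[Install]
`

func main() {
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0755); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload` must follow, as the log runs shortly after.
	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		log.Fatal(err)
	}
}
)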
	I1102 13:37:27.128346  333962 ssh_runner.go:195] Run: crio config
	I1102 13:37:27.177989  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:27.178010  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:27.178023  333962 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1102 13:37:27.178058  333962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-066482 NodeName:newest-cni-066482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:27.178237  333962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-066482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
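	(Editor's note: the generated kubeadm config above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal sketch, not minikube's own code, that splits such a stream and reports each document's apiVersion and kind, assuming gopkg.in/yaml.v3 is available and "kubeadm.yaml" is a hypothetical local copy:

// Sketch: enumerate the documents in a multi-document kubeadm YAML stream.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the stream
			}
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
)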
	
	I1102 13:37:27.178304  333962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:27.189125  333962 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:27.189195  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:27.198724  333962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 13:37:27.212769  333962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:27.228632  333962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1102 13:37:27.246146  333962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:27.251613  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
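	(Editor's note: the one-liner above rewrites /etc/hosts idempotently: grep -v drops any existing line ending in a tab plus control-plane.minikube.internal, the echo appends the fresh mapping, and the result is staged in a temp file and copied back with sudo in a single cp. A rough Go equivalent, a sketch rather than minikube's implementation, assuming permission to write /etc/hosts:

// Sketch: replace the control-plane.minikube.internal entry in /etc/hosts.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var out []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Keep every line that is not a stale entry for the control-plane alias.
		if !strings.HasSuffix(line, "\t"+host) {
			out = append(out, line)
		}
	}
	out = append(out, "192.168.76.2\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(out, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
)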
	I1102 13:37:27.264788  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.377806  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.402967  333962 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482 for IP: 192.168.76.2
	I1102 13:37:27.402990  333962 certs.go:195] generating shared ca certs ...
	I1102 13:37:27.403009  333962 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.403159  333962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:27.403219  333962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:27.403231  333962 certs.go:257] generating profile certs ...
	I1102 13:37:27.403335  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.key
	I1102 13:37:27.403407  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b
	I1102 13:37:27.403461  333962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key
	I1102 13:37:27.403744  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:27.403786  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:27.403799  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:27.403828  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:27.403859  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:27.403889  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:27.403938  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:27.404687  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:27.430704  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:27.452417  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:27.483637  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:27.517977  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 13:37:27.573265  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:37:27.598304  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:27.618317  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:37:27.639808  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:27.657181  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:27.681070  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:27.704152  333962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:27.722253  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:27.731519  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:27.743037  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748191  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748248  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.799685  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:27.809081  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:27.818029  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822628  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822681  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.881477  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.891397  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.900808  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904551  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904621  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.942963  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
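	(Editor's note: the ln -fs commands above build OpenSSL's hash-link layout in /etc/ssl/certs: each link name is the certificate's subject hash, as printed by `openssl x509 -hash -noout`, plus a ".0" suffix, which is how the minikubeCA.pem above becomes b5213941.0. A sketch that derives the link name the same way by shelling out to openssl, assuming openssl is on PATH and using a path from this log:

// Sketch: compute the OpenSSL subject-hash link name for a CA certificate.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("/etc/ssl/certs/%s.0\n", hash) // e.g. /etc/ssl/certs/b5213941.0
}
)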
	I1102 13:37:27.952008  333962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.956221  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.997863  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:28.047948  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:28.098660  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:28.159695  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:28.224833  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
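	(Editor's note: `openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours; the run above applies it to each control-plane cert in turn. The same check in pure Go with crypto/x509, a sketch using one of the paths from this log:

// Sketch: fail if a certificate expires within 24 hours,
// equivalent to `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		log.Fatalf("certificate expires within 24h (NotAfter=%s)", cert.NotAfter)
	}
	fmt.Println("certificate valid for at least 24h")
}
)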
	I1102 13:37:28.294684  333962 kubeadm.go:401] StartCluster: {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:28.294796  333962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:28.294862  333962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:28.338693  333962 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:28.338718  333962 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:28.338726  333962 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:28.338732  333962 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:28.338766  333962 cri.go:89] found id: ""
	I1102 13:37:28.338853  333962 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:28.354945  333962 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:28Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:28.355009  333962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:28.369068  333962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:28.369089  333962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:28.369134  333962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:28.379230  333962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:28.380715  333962 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-066482" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.381840  333962 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-066482" cluster setting kubeconfig missing "newest-cni-066482" context setting]
	I1102 13:37:28.383187  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.385699  333962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:28.395624  333962 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1102 13:37:28.395794  333962 kubeadm.go:602] duration metric: took 26.694184ms to restartPrimaryControlPlane
	I1102 13:37:28.395818  333962 kubeadm.go:403] duration metric: took 101.142697ms to StartCluster
	I1102 13:37:28.395872  333962 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.396257  333962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.398943  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.399509  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:28.399593  333962 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:28.399697  333962 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-066482"
	I1102 13:37:28.399715  333962 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-066482"
	W1102 13:37:28.399723  333962 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:28.399747  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400242  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400322  333962 addons.go:70] Setting dashboard=true in profile "newest-cni-066482"
	I1102 13:37:28.400358  333962 addons.go:239] Setting addon dashboard=true in "newest-cni-066482"
	W1102 13:37:28.400367  333962 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:28.400398  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400424  333962 addons.go:70] Setting default-storageclass=true in profile "newest-cni-066482"
	I1102 13:37:28.400440  333962 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-066482"
	I1102 13:37:28.400747  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400930  333962 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:28.401517  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.404755  333962 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:28.405862  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:28.441415  333962 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 13:37:28.441452  333962 addons.go:239] Setting addon default-storageclass=true in "newest-cni-066482"
	W1102 13:37:28.441469  333962 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:28.441497  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.441992  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.443413  333962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:28.443587  333962 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.493290  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:27.493307  333276 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:27.493359  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.524914  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.531668  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.532019  333276 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.532031  333276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:27.532222  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.567797  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.652323  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.668241  333276 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:27.674864  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:27.674945  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:27.680089  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.693623  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:27.693664  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:27.697013  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.711998  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:27.712105  333276 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:27.730732  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:27.730759  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:27.750616  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:27.750640  333276 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:27.770302  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:27.770348  333276 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:27.786951  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:27.786978  333276 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:27.803298  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:27.803327  333276 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:27.818949  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:27.818969  333276 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:27.832390  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:29.492024  333276 node_ready.go:49] node "default-k8s-diff-port-538419" is "Ready"
	I1102 13:37:29.492059  333276 node_ready.go:38] duration metric: took 1.82377358s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:29.492086  333276 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:29.492140  333276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:30.138979  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458843131s)
	I1102 13:37:30.139203  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.306780942s)
	I1102 13:37:30.139232  333276 api_server.go:72] duration metric: took 2.680469941s to wait for apiserver process to appear ...
	I1102 13:37:30.139245  333276 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:30.139262  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.139337  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.442032819s)
	I1102 13:37:30.140830  333276 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-538419 addons enable metrics-server
	
	I1102 13:37:30.144441  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.144472  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:30.146788  333276 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1102 13:37:28.444400  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:28.444417  333962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:28.444498  333962 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.444527  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:28.444586  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.444500  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.481261  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.483777  333962 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.483797  333962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:28.483850  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.485369  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.519190  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
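	(Editor's note: the `docker container inspect -f` calls above use a Go template to pull the host port mapped to the container's 22/tcp, which the ssh client then dials, Port:33135 here. A sketch of the same lookup from Go via the docker CLI, assuming docker is on PATH and using the container name from this log:

// Sketch: read the host port mapped to a container's 22/tcp
// via `docker container inspect -f`.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"newest-cni-066482").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 33135
}
)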
	I1102 13:37:28.625401  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:28.638037  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.653422  333962 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:28.653533  333962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:28.682341  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.694090  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:28.694153  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:28.716329  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:28.716362  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:28.737776  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:28.737802  333962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:28.755596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:28.755618  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:28.780596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:28.780618  333962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:28.797326  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:28.797355  333962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:28.814533  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:28.814561  333962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:28.832611  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:28.832643  333962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:28.856649  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:28.856713  333962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:28.874888  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:31.209184  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.571053535s)
	I1102 13:37:31.209241  333962 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.555675413s)
	I1102 13:37:31.209282  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.526844296s)
	I1102 13:37:31.209372  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.334451096s)
	I1102 13:37:31.209287  333962 api_server.go:72] duration metric: took 2.808316845s to wait for apiserver process to appear ...
	I1102 13:37:31.209432  333962 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:31.209539  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.211060  333962 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-066482 addons enable metrics-server
	
	I1102 13:37:31.216831  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.216854  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.222003  333962 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1102 13:37:28.750465  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:30.751057  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:31.223225  333962 addons.go:515] duration metric: took 2.823637855s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:31.709830  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.714383  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.714411  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:32.209645  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:32.214358  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 13:37:32.215702  333962 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:32.215723  333962 api_server.go:131] duration metric: took 1.006197716s to wait for apiserver health ...
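	(Editor's note: the healthz loop above tolerates 500s while post-start hooks such as rbac/bootstrap-roles finish, then accepts the first 200 about a second later. A minimal poller in the same spirit, a sketch rather than api_server.go itself; it skips TLS verification since the apiserver cert is not in the client's trust store here:

// Sketch: poll an apiserver /healthz endpoint until it returns 200 or a timeout elapses.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}
)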
	I1102 13:37:32.215740  333962 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:32.219326  333962 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:32.219361  333962 system_pods.go:61] "coredns-66bc5c9577-9knvp" [fc8ccf3a-6c3a-4df9-b174-358eea8022b8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219370  333962 system_pods.go:61] "etcd-newest-cni-066482" [b4f125a2-c9c3-4192-bf23-c4ad050bb815] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:32.219379  333962 system_pods.go:61] "kindnet-schdw" [74998f6e-2a7a-40d8-a5c2-a1142f69ee93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 13:37:32.219392  333962 system_pods.go:61] "kube-apiserver-newest-cni-066482" [e270489b-3057-480f-96dd-329cbcc6f0e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:32.219397  333962 system_pods.go:61] "kube-controller-manager-newest-cni-066482" [9b62b1ef-e72e-41f9-9e3d-c57bfaf0b578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:32.219403  333962 system_pods.go:61] "kube-proxy-fkp22" [85a24a6f-4f8c-4671-92f6-fbe43ab7bb10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 13:37:32.219408  333962 system_pods.go:61] "kube-scheduler-newest-cni-066482" [5f88460d-ea42-4891-a458-b86eb57b551e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:32.219417  333962 system_pods.go:61] "storage-provisioner" [3bbb95ec-ecf8-4335-b3df-82a08d03b66b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219424  333962 system_pods.go:74] duration metric: took 3.677705ms to wait for pod list to return data ...
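	(Editor's note: at this stage system_pods.go only checks that the expected kube-system pods exist; Pending and ContainersNotReady states are acceptable because newest-cni verifies with node_ready:false. Roughly the same listing with client-go, a sketch assuming a kubeconfig at the default location:

// Sketch: list kube-system pods and print name plus phase, similar to the check above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
}
)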
	I1102 13:37:32.219434  333962 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:32.221997  333962 default_sa.go:45] found service account: "default"
	I1102 13:37:32.222015  333962 default_sa.go:55] duration metric: took 2.576388ms for default service account to be created ...
	I1102 13:37:32.222026  333962 kubeadm.go:587] duration metric: took 3.821064355s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:32.222059  333962 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:32.224451  333962 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:32.224479  333962 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:32.224495  333962 node_conditions.go:105] duration metric: took 2.431117ms to run NodePressure ...
	I1102 13:37:32.224508  333962 start.go:242] waiting for startup goroutines ...
	I1102 13:37:32.224519  333962 start.go:247] waiting for cluster config update ...
	I1102 13:37:32.224531  333962 start.go:256] writing updated cluster config ...
	I1102 13:37:32.224891  333962 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:32.277880  333962 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:32.280437  333962 out.go:179] * Done! kubectl is now configured to use "newest-cni-066482" cluster and "default" namespace by default
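The 333962 trace above is the apiserver health wait: minikube polls /healthz until it returns 200 with body "ok", then moves on to the system-pod and service-account checks. A minimal Go sketch of such a poll, assuming a pre-configured TLS client; the URL, interval, and timeout below are illustrative and not minikube's actual values:

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls /healthz until it returns HTTP 200 with body "ok",
	// or the context deadline expires.
	func waitForHealthz(ctx context.Context, client *http.Client, url string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
			case <-ticker.C:
				resp, err := client.Get(url)
				if err != nil {
					continue // apiserver not accepting connections yet
				}
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
				// A 500 here carries the verbose per-check list seen elsewhere in this log.
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		// InsecureSkipVerify for illustration only; the real check trusts the cluster CA.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		if err := waitForHealthz(ctx, client, "https://192.168.76.2:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}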
	W1102 13:37:29.133694  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:31.633878  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:32.248764  321355 pod_ready.go:94] pod "coredns-66bc5c9577-2dtpc" is "Ready"
	I1102 13:37:32.248791  321355 pod_ready.go:86] duration metric: took 36.005777547s for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.251505  321355 pod_ready.go:83] waiting for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.256003  321355 pod_ready.go:94] pod "etcd-no-preload-978795" is "Ready"
	I1102 13:37:32.256030  321355 pod_ready.go:86] duration metric: took 4.500033ms for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.258154  321355 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.262361  321355 pod_ready.go:94] pod "kube-apiserver-no-preload-978795" is "Ready"
	I1102 13:37:32.262386  321355 pod_ready.go:86] duration metric: took 4.208933ms for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.264670  321355 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.446929  321355 pod_ready.go:94] pod "kube-controller-manager-no-preload-978795" is "Ready"
	I1102 13:37:32.446958  321355 pod_ready.go:86] duration metric: took 182.263594ms for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.647228  321355 pod_ready.go:83] waiting for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.046223  321355 pod_ready.go:94] pod "kube-proxy-rmkmd" is "Ready"
	I1102 13:37:33.046245  321355 pod_ready.go:86] duration metric: took 398.98563ms for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.247357  321355 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646686  321355 pod_ready.go:94] pod "kube-scheduler-no-preload-978795" is "Ready"
	I1102 13:37:33.646712  321355 pod_ready.go:86] duration metric: took 399.328602ms for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646724  321355 pod_ready.go:40] duration metric: took 37.476249238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:33.693279  321355 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:33.695127  321355 out.go:179] * Done! kubectl is now configured to use "no-preload-978795" cluster and "default" namespace by default
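The 321355 trace above is the pod_ready.go wait: for each control-plane label it polls until the pod's Ready condition is True or the pod is gone. A client-go sketch of the underlying per-pod check; the API calls are standard client-go, but the function name and shape are illustrative, and constructing the Clientset (e.g. via clientcmd) is omitted:

	package podready

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the named pod currently has condition
	// Ready=True, the signal the pod_ready.go loop above waits on.
	func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

The harness layers retry, timeout, and the "or be gone" NotFound handling on top of a point-in-time check like this.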
	I1102 13:37:30.148737  333276 addons.go:515] duration metric: took 2.689945409s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:30.639704  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.646596  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.646625  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.140024  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:31.144505  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1102 13:37:31.145652  333276 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:31.145677  333276 api_server.go:131] duration metric: took 1.006426268s to wait for apiserver health ...
	I1102 13:37:31.145686  333276 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:31.148654  333276 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:31.148693  333276 system_pods.go:61] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.148706  333276 system_pods.go:61] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.148715  333276 system_pods.go:61] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.148725  333276 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.148735  333276 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.148740  333276 system_pods.go:61] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.148749  333276 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.148752  333276 system_pods.go:61] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.148758  333276 system_pods.go:74] duration metric: took 3.0672ms to wait for pod list to return data ...
	I1102 13:37:31.148767  333276 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:31.151024  333276 default_sa.go:45] found service account: "default"
	I1102 13:37:31.151047  333276 default_sa.go:55] duration metric: took 2.27431ms for default service account to be created ...
	I1102 13:37:31.151056  333276 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:37:31.153886  333276 system_pods.go:86] 8 kube-system pods found
	I1102 13:37:31.153909  333276 system_pods.go:89] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.153917  333276 system_pods.go:89] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.153923  333276 system_pods.go:89] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.153933  333276 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.153941  333276 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.153948  333276 system_pods.go:89] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.153953  333276 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.153958  333276 system_pods.go:89] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.153965  333276 system_pods.go:126] duration metric: took 2.903516ms to wait for k8s-apps to be running ...
	I1102 13:37:31.153973  333276 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:37:31.154011  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:31.167191  333276 system_svc.go:56] duration metric: took 13.212049ms WaitForService to wait for kubelet
	I1102 13:37:31.167214  333276 kubeadm.go:587] duration metric: took 3.70845301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:37:31.167229  333276 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:31.170065  333276 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:31.170091  333276 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:31.170118  333276 node_conditions.go:105] duration metric: took 2.883566ms to run NodePressure ...
	I1102 13:37:31.170133  333276 start.go:242] waiting for startup goroutines ...
	I1102 13:37:31.170146  333276 start.go:247] waiting for cluster config update ...
	I1102 13:37:31.170163  333276 start.go:256] writing updated cluster config ...
	I1102 13:37:31.170468  333276 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:31.174099  333276 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:31.178339  333276 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 13:37:33.184101  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
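The 500 bodies logged above by 333276 are the apiserver's verbose health report: one [+]/[-] line per check, with poststarthook/rbac/bootstrap-roles still failing during startup. A self-contained sketch of reducing such a body to its failing checks; pure string handling, nothing minikube-specific:

	package main

	import (
		"fmt"
		"strings"
	)

	// failingChecks extracts the check names marked "[-]" from a verbose
	// /healthz response body such as the 500 payloads logged above.
	func failingChecks(body string) []string {
		var failed []string
		for _, line := range strings.Split(body, "\n") {
			line = strings.TrimSpace(line)
			if strings.HasPrefix(line, "[-]") {
				// e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
				name := strings.TrimPrefix(line, "[-]")
				failed = append(failed, strings.SplitN(name, " ", 2)[0])
			}
		}
		return failed
	}

	func main() {
		body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
		fmt.Println(failingChecks(body)) // [poststarthook/rbac/bootstrap-roles]
	}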
	
	
	==> CRI-O <==
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.815058041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.817547049Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0925a0e9-07bb-4e00-b132-5fbaac10ac00 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.818131359Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=16378edb-f173-4577-a620-68011568e5b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.81892156Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.81948379Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.819594906Z" level=info msg="Ran pod sandbox 9fe2b118660a2349ce7da87d04767c5b9b170a2c545a7e6a9b53b67492fa05e8 with infra container: kube-system/kube-proxy-fkp22/POD" id=0925a0e9-07bb-4e00-b132-5fbaac10ac00 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.820257575Z" level=info msg="Ran pod sandbox d5cc96c6555e14b3d5f2b9d09ca394874de3e7bdb7f3c4c581f1cfd935091a1f with infra container: kube-system/kindnet-schdw/POD" id=16378edb-f173-4577-a620-68011568e5b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.820540736Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=bd3e4a4c-9dd0-4f11-88ad-a7c7c7d97f62 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.821139415Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b14b9584-b166-4c74-a8e1-bc5b5ef550fe name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.821433544Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d3e93bf9-49ce-4d16-ada2-358b69604bef name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.82201265Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=72c72455-5131-481e-a931-74826fb84c6c name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.822372913Z" level=info msg="Creating container: kube-system/kube-proxy-fkp22/kube-proxy" id=ddab3503-2a46-4117-992d-77b56c00f8d4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.822495298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.822929036Z" level=info msg="Creating container: kube-system/kindnet-schdw/kindnet-cni" id=0e1fc80d-08de-48e7-a215-eaf277c73a1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.823007965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.827436457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.828551144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.829019726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.829482066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.854993835Z" level=info msg="Created container d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7: kube-system/kindnet-schdw/kindnet-cni" id=0e1fc80d-08de-48e7-a215-eaf277c73a1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.855521424Z" level=info msg="Starting container: d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7" id=0b70370e-1c8f-427e-9d98-7d9b1d500d84 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.85745777Z" level=info msg="Started container" PID=1069 containerID=d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7 description=kube-system/kindnet-schdw/kindnet-cni id=0b70370e-1c8f-427e-9d98-7d9b1d500d84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d5cc96c6555e14b3d5f2b9d09ca394874de3e7bdb7f3c4c581f1cfd935091a1f
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.860655754Z" level=info msg="Created container 505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323: kube-system/kube-proxy-fkp22/kube-proxy" id=ddab3503-2a46-4117-992d-77b56c00f8d4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.861138389Z" level=info msg="Starting container: 505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323" id=736b49cd-2171-43a8-b978-551e899f0f05 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.86420223Z" level=info msg="Started container" PID=1070 containerID=505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323 description=kube-system/kube-proxy-fkp22/kube-proxy id=736b49cd-2171-43a8-b978-551e899f0f05 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9fe2b118660a2349ce7da87d04767c5b9b170a2c545a7e6a9b53b67492fa05e8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d125ac31bfe73       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   d5cc96c6555e1       kindnet-schdw                               kube-system
	505b724e29a6b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   9fe2b118660a2       kube-proxy-fkp22                            kube-system
	a2d506030cda6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   091ca0ce409a5       etcd-newest-cni-066482                      kube-system
	9244b3749165c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   c6b8a9d9672e3       kube-apiserver-newest-cni-066482            kube-system
	119e599a978f8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   2d51507973994       kube-controller-manager-newest-cni-066482   kube-system
	b46475f69b265       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   605031e3d45cc       kube-scheduler-newest-cni-066482            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-066482
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-066482
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=newest-cni-066482
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_37_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:37:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-066482
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:37:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:37:30 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:37:30 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:37:30 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 02 Nov 2025 13:37:30 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-066482
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                dba0db4c-1d52-42f8-ac0c-77487a17adc5
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-066482                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-schdw                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-066482             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-newest-cni-066482    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-fkp22                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-066482             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 21s              kube-proxy       
	  Normal  Starting                 4s               kube-proxy       
	  Normal  Starting                 28s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s              kubelet          Node newest-cni-066482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s              kubelet          Node newest-cni-066482 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s              kubelet          Node newest-cni-066482 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s              node-controller  Node newest-cni-066482 event: Registered Node newest-cni-066482 in Controller
	  Normal  Starting                 9s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x9 over 9s)  kubelet          Node newest-cni-066482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)  kubelet          Node newest-cni-066482 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)  kubelet          Node newest-cni-066482 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s               node-controller  Node newest-cni-066482 event: Registered Node newest-cni-066482 in Controller
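Per the describe output above, the node is Ready=False (no CNI configuration yet) and still carries the node.kubernetes.io/not-ready:NoSchedule taint, which is exactly why coredns-66bc5c9577-9knvp and storage-provisioner were reported Unschedulable earlier in this log. A client-go sketch of that schedulability check; illustrative, not the test harness's code:

	package nodecheck

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeSchedulable reports whether the node is Ready and free of the
	// not-ready taint that blocked scheduling in the output above.
	func nodeSchedulable(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, t := range node.Spec.Taints {
			if t.Key == "node.kubernetes.io/not-ready" {
				return false, nil
			}
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}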
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9] <==
	{"level":"warn","ts":"2025-11-02T13:37:29.878222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.888915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.896730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.904846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.912180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.922261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.931190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.939405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.947197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.954282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.962716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.970731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.978432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.989074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.993790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.002421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.009720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.019165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.028860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.038521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.045895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.061860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.069295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.078758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.141865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50636","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:37:36 up  1:20,  0 user,  load average: 4.16, 4.08, 2.70
	Linux newest-cni-066482 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7] <==
	I1102 13:37:32.091964       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:37:32.092235       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 13:37:32.092369       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:37:32.092386       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:37:32.092407       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:37:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:37:32.392752       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:37:32.392879       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:37:32.392931       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:37:32.492431       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:37:32.850729       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:37:32.850763       1 metrics.go:72] Registering metrics
	I1102 13:37:32.850824       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c] <==
	I1102 13:37:30.731550       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:37:30.758730       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 13:37:30.759185       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 13:37:30.759261       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 13:37:30.759426       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 13:37:30.759433       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1102 13:37:30.759446       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 13:37:30.759462       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 13:37:30.759784       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 13:37:30.759483       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 13:37:30.759931       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:37:30.765682       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 13:37:30.767271       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:37:30.767434       1 cache.go:39] Caches are synced for autoregister controller
	I1102 13:37:30.974557       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 13:37:31.002181       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:37:31.019365       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:37:31.026164       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:37:31.032208       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:37:31.069101       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.189.153"}
	I1102 13:37:31.078507       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.216.60"}
	I1102 13:37:31.563037       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:37:34.025650       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 13:37:34.325465       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:37:34.425026       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a] <==
	I1102 13:37:34.004055       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:37:34.004078       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:37:34.004087       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:37:34.006388       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1102 13:37:34.008622       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 13:37:34.019877       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 13:37:34.019944       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 13:37:34.019979       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 13:37:34.020191       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 13:37:34.020269       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:37:34.020382       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1102 13:37:34.021377       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 13:37:34.021855       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 13:37:34.024728       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 13:37:34.026932       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 13:37:34.028053       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 13:37:34.028093       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:37:34.030954       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 13:37:34.033289       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 13:37:34.036161       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:37:34.050129       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:37:34.052958       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 13:37:34.055198       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 13:37:34.057849       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 13:37:34.063711       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323] <==
	I1102 13:37:31.897337       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:37:31.970249       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:37:32.070983       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:37:32.071017       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1102 13:37:32.071103       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:37:32.090588       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:37:32.090657       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:37:32.096179       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:37:32.096629       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:37:32.096669       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:32.099748       1 config.go:309] "Starting node config controller"
	I1102 13:37:32.099775       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:37:32.099784       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:37:32.099798       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:37:32.099810       1 config.go:200] "Starting service config controller"
	I1102 13:37:32.099816       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:37:32.099815       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:37:32.099848       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:37:32.099855       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:37:32.200198       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:37:32.200217       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 13:37:32.200240       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119] <==
	I1102 13:37:29.049547       1 serving.go:386] Generated self-signed cert in-memory
	W1102 13:37:30.582068       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 13:37:30.582102       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 13:37:30.582114       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 13:37:30.582124       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 13:37:30.634065       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:37:30.634096       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:30.637464       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:30.637519       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:30.638796       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:37:30.638886       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1102 13:37:30.645400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1102 13:37:30.653224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 13:37:30.653342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 13:37:30.653711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 13:37:30.658170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 13:37:30.658381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 13:37:30.658496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 13:37:30.658580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 13:37:30.670827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 13:37:30.671038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 13:37:30.671141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 13:37:30.674117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1102 13:37:32.138080       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.608885     697 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-066482\" not found" node="newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.622694     697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.714265     697 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.714372     697 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.714418     697 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.715695     697 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.718691     697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.737151     697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-066482\" already exists" pod="kube-system/kube-controller-manager-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.745137     697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-066482\" already exists" pod="kube-system/kube-controller-manager-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.745170     697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.751217     697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-066482\" already exists" pod="kube-system/kube-scheduler-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.751257     697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.756997     697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-066482\" already exists" pod="kube-system/etcd-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.757031     697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.763029     697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-066482\" already exists" pod="kube-system/kube-apiserver-newest-cni-066482"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.506457     697 apiserver.go:52] "Watching apiserver"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.520757     697 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.546132     697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85a24a6f-4f8c-4671-92f6-fbe43ab7bb10-xtables-lock\") pod \"kube-proxy-fkp22\" (UID: \"85a24a6f-4f8c-4671-92f6-fbe43ab7bb10\") " pod="kube-system/kube-proxy-fkp22"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.546257     697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/74998f6e-2a7a-40d8-a5c2-a1142f69ee93-cni-cfg\") pod \"kindnet-schdw\" (UID: \"74998f6e-2a7a-40d8-a5c2-a1142f69ee93\") " pod="kube-system/kindnet-schdw"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.546308     697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74998f6e-2a7a-40d8-a5c2-a1142f69ee93-lib-modules\") pod \"kindnet-schdw\" (UID: \"74998f6e-2a7a-40d8-a5c2-a1142f69ee93\") " pod="kube-system/kindnet-schdw"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.546342     697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85a24a6f-4f8c-4671-92f6-fbe43ab7bb10-lib-modules\") pod \"kube-proxy-fkp22\" (UID: \"85a24a6f-4f8c-4671-92f6-fbe43ab7bb10\") " pod="kube-system/kube-proxy-fkp22"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.546462     697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74998f6e-2a7a-40d8-a5c2-a1142f69ee93-xtables-lock\") pod \"kindnet-schdw\" (UID: \"74998f6e-2a7a-40d8-a5c2-a1142f69ee93\") " pod="kube-system/kindnet-schdw"
	Nov 02 13:37:33 newest-cni-066482 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:37:33 newest-cni-066482 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:37:33 newest-cni-066482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
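The journal excerpt above ends with systemd stopping kubelet, which matches the pause under test; the repeated "Failed creating a mirror pod ... already exists" errors are the restarted kubelet re-registering its static control-plane pods and are usually benign. A minimal sketch for pulling the same excerpt by hand, assuming the profile is still up (journalctl being usable inside the kicbase node is an assumption of the sketch):

	# fetch the last 50 kubelet journal lines from the node
	minikube ssh -p newest-cni-066482 -- sudo journalctl -u kubelet -n 50 --no-pager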
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-066482 -n newest-cni-066482
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-066482 -n newest-cni-066482: exit status 2 (426.205835ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
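These status checks drive Go templates over minikube's status struct ({{.APIServer}} here, {{.Host}} further down); "minikube status" deliberately exits non-zero when a component is not Running, which is why the harness annotates exit status 2 as "may be ok". A hedged sketch querying several fields at once ({{.Kubelet}} is assumed from minikube's default status output; the exact exit-code-to-state mapping is inferred, not verified here):

	out/minikube-linux-amd64 status -p newest-cni-066482 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'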
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-066482 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-9knvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-n26qn kubernetes-dashboard-855c9754f9-zc94t
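The harness finds non-running pods with a field selector on status.phase; when reproducing this check by hand, a variant that also prints each pod's phase and node makes triage easier (a sketch, assuming the same kubeconfig context still exists):

	# list non-Running pods with phase and node instead of bare names
	kubectl --context newest-cni-066482 get pods -A \
	  --field-selector=status.phase!=Running -o wide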
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-066482 describe pod coredns-66bc5c9577-9knvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-n26qn kubernetes-dashboard-855c9754f9-zc94t
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-066482 describe pod coredns-66bc5c9577-9knvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-n26qn kubernetes-dashboard-855c9754f9-zc94t: exit status 1 (86.78201ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-9knvp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-n26qn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-zc94t" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-066482 describe pod coredns-66bc5c9577-9knvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-n26qn kubernetes-dashboard-855c9754f9-zc94t: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-066482
helpers_test.go:243: (dbg) docker inspect newest-cni-066482:

-- stdout --
	[
	    {
	        "Id": "2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3",
	        "Created": "2025-11-02T13:36:51.061365338Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 334243,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:37:21.032155919Z",
	            "FinishedAt": "2025-11-02T13:37:19.568401188Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/hosts",
	        "LogPath": "/var/lib/docker/containers/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3/2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3-json.log",
	        "Name": "/newest-cni-066482",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-066482:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-066482",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ae7f574b714956382fdf7002249aec552c88f0b7892a197895e7860e9d908d3",
	                "LowerDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dc19d6848108e1c6461d9f8cb5eaa1159e1da62088df2eab2062c80cce7fd960/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-066482",
	                "Source": "/var/lib/docker/volumes/newest-cni-066482/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-066482",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-066482",
	                "name.minikube.sigs.k8s.io": "newest-cni-066482",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "321ee91cd3e7036da545c4bd151aa6f4afee74b740669f155c49e5760c0f1a9a",
	            "SandboxKey": "/var/run/docker/netns/321ee91cd3e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-066482": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:6b:6a:ef:d5:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cb7d140d6f6ce408658294ace6543d63c65a2cd98673247fb739d0124deecb8e",
	                    "EndpointID": "85aa3aff2248677a94b72dec338a6b6bce495eb46f07c926cd4e1ca605eb3912",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-066482",
	                        "2ae7f574b714"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
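Rather than dumping the full JSON, individual fields can be pulled with the same -f/--format Go templates used by the minikube log lines below (e.g. --format={{.State.Status}}); since the network map key is the hyphenated profile name, it has to be read with index. A minimal sketch:

	# extract just the container state and the profile network's IPv4 address
	docker inspect newest-cni-066482 \
	  -f '{{.State.Status}} {{(index .NetworkSettings.Networks "newest-cni-066482").IPAddress}}'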
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-066482 -n newest-cni-066482
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-066482 -n newest-cni-066482: exit status 2 (382.813183ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-066482 logs -n 25
E1102 13:37:38.929802   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-066482 logs -n 25: (1.348081786s)
E1102 13:37:39.197691   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-560932                                                                                                                                                                                                               │ disable-driver-mounts-560932 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-978795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p no-preload-978795 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ image   │ old-k8s-version-054159 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-054159 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-978795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-748183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ start   │ -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ stop    │ -p embed-certs-748183 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538419 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p embed-certs-748183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ stop    │ -p newest-cni-066482 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-066482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ newest-cni-066482 image list --format=json                                                                                                                                                                                                    │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p newest-cni-066482 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:37:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:37:20.524373  333962 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:37:20.524647  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524664  333962 out.go:374] Setting ErrFile to fd 2...
	I1102 13:37:20.524670  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524846  333962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:37:20.525403  333962 out.go:368] Setting JSON to false
	I1102 13:37:20.526966  333962 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4793,"bootTime":1762085848,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:37:20.527085  333962 start.go:143] virtualization: kvm guest
	I1102 13:37:20.531180  333962 out.go:179] * [newest-cni-066482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:37:20.533535  333962 notify.go:221] Checking for updates...
	I1102 13:37:20.533705  333962 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:37:20.535165  333962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:37:20.536733  333962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:20.538369  333962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:37:20.539773  333962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:37:20.541014  333962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:37:20.543949  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:20.544901  333962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:37:20.580929  333962 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:37:20.581269  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.677940  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.664880977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.678092  333962 docker.go:319] overlay module found
	I1102 13:37:20.686090  333962 out.go:179] * Using the docker driver based on existing profile
	I1102 13:37:20.689767  333962 start.go:309] selected driver: docker
	I1102 13:37:20.689788  333962 start.go:930] validating driver "docker" against &{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.689907  333962 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:37:20.690830  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.765132  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.75342287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.765679  333962 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:20.765731  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:20.765799  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:20.765881  333962 start.go:353] cluster config:
	{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.825212  333962 out.go:179] * Starting "newest-cni-066482" primary control-plane node in "newest-cni-066482" cluster
	I1102 13:37:20.829240  333962 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:37:20.869092  333962 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:37:20.895924  333962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:37:20.895925  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:20.896230  333962 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:37:20.896249  333962 cache.go:59] Caching tarball of preloaded images
	I1102 13:37:20.896370  333962 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:37:20.896389  333962 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:37:20.896531  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:20.923310  333962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:37:20.923336  333962 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:37:20.923354  333962 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:37:20.923397  333962 start.go:360] acquireMachinesLock for newest-cni-066482: {Name:mk25ceca9700045fc79c727ac5793f50b1f35f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:37:20.923467  333962 start.go:364] duration metric: took 45.165µs to acquireMachinesLock for "newest-cni-066482"
	I1102 13:37:20.923495  333962 start.go:96] Skipping create...Using existing machine configuration
	I1102 13:37:20.923507  333962 fix.go:54] fixHost starting: 
	I1102 13:37:20.923821  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:20.947956  333962 fix.go:112] recreateIfNeeded on newest-cni-066482: state=Stopped err=<nil>
	W1102 13:37:20.947991  333962 fix.go:138] unexpected machine state, will restart: <nil>
	W1102 13:37:17.749910  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:19.754111  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:18.133437  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:20.135974  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:22.633523  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:19.800458  333276 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-538419" ...
	I1102 13:37:19.800582  333276 cli_runner.go:164] Run: docker start default-k8s-diff-port-538419
	I1102 13:37:20.258040  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:20.285518  333276 kic.go:430] container "default-k8s-diff-port-538419" state is running.
	I1102 13:37:20.285975  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:20.314790  333276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:37:20.315668  333276 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:20.316243  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:20.344162  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:20.344635  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:20.344656  333276 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:20.345938  333276 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42554->127.0.0.1:33130: read: connection reset by peer
	I1102 13:37:23.485888  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.485911  333276 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-538419"
	I1102 13:37:23.485968  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.504539  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.504787  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.504808  333276 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-538419 && echo "default-k8s-diff-port-538419" | sudo tee /etc/hostname
	I1102 13:37:23.654299  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.654392  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.673075  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.673329  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.673355  333276 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-538419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-538419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-538419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:23.814290  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:23.814321  333276 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:23.814341  333276 ubuntu.go:190] setting up certificates
	I1102 13:37:23.814351  333276 provision.go:84] configureAuth start
	I1102 13:37:23.814396  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:23.831955  333276 provision.go:143] copyHostCerts
	I1102 13:37:23.832026  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:23.832046  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:23.832132  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:23.832261  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:23.832273  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:23.832318  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:23.832420  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:23.832433  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:23.832471  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:23.832546  333276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-538419 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-538419 localhost minikube]
	I1102 13:37:24.219472  333276 provision.go:177] copyRemoteCerts
	I1102 13:37:24.219536  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.219587  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.237848  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.340891  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 13:37:24.358910  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:24.376167  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:24.393830  333276 provision.go:87] duration metric: took 579.46643ms to configureAuth
	I1102 13:37:24.393865  333276 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:24.394064  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:24.394157  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.412877  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.413122  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:24.413143  333276 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:20.978818  333962 out.go:252] * Restarting existing docker container for "newest-cni-066482" ...
	I1102 13:37:20.978914  333962 cli_runner.go:164] Run: docker start newest-cni-066482
	I1102 13:37:21.270167  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:21.288682  333962 kic.go:430] container "newest-cni-066482" state is running.
	I1102 13:37:21.289009  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:21.309331  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:21.309611  333962 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:21.309709  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:21.330053  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:21.330413  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:21.330432  333962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:21.331174  333962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55362->127.0.0.1:33135: read: connection reset by peer
	I1102 13:37:24.473386  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.473415  333962 ubuntu.go:182] provisioning hostname "newest-cni-066482"
	I1102 13:37:24.473479  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.491931  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.492137  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.492150  333962 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-066482 && echo "newest-cni-066482" | sudo tee /etc/hostname
	I1102 13:37:24.643677  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.643803  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.663238  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.663468  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.663495  333962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-066482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-066482/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-066482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:24.810077  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:24.810117  333962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:24.810141  333962 ubuntu.go:190] setting up certificates
	I1102 13:37:24.810156  333962 provision.go:84] configureAuth start
	I1102 13:37:24.810212  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:24.827792  333962 provision.go:143] copyHostCerts
	I1102 13:37:24.827858  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:24.827875  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:24.827953  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:24.828150  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:24.828164  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:24.828215  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:24.828305  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:24.828317  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:24.828355  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:24.828426  333962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.newest-cni-066482 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-066482]
	I1102 13:37:24.927237  333962 provision.go:177] copyRemoteCerts
	I1102 13:37:24.927289  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.927321  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.944584  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.045425  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:25.062863  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:25.080629  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:37:25.097296  333962 provision.go:87] duration metric: took 287.125327ms to configureAuth
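provision.go:117 above signs a fresh server certificate against the profile's CA, with exactly the SAN list printed in the log, and copyRemoteCerts then ships the pair to /etc/docker. A sketch of the signing step with Go's crypto/x509 (the ca.pem/ca-key.pem file names, the PKCS#1 RSA key format, and the three-year lifetime are assumptions for illustration, not minikube's exact settings):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Assumed local copies of the CA pair from .minikube/certs.
		caPEM, err := os.ReadFile("ca.pem")
		check(err)
		keyPEM, err := os.ReadFile("ca-key.pem")
		check(err)
		caBlock, _ := pem.Decode(caPEM)
		keyBlock, _ := pem.Decode(keyPEM)
		if caBlock == nil || keyBlock == nil {
			panic("bad PEM input")
		}
		ca, err := x509.ParseCertificate(caBlock.Bytes)
		check(err)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
		check(err)

		// SANs and org exactly as printed by provision.go:117 above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-066482"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-066482"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
		check(err)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}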
	I1102 13:37:25.097332  333962 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:25.097535  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:25.097668  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.115731  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:25.115937  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:25.115955  333962 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:25.401017  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:25.401045  333962 machine.go:97] duration metric: took 4.091415666s to provisionDockerMachine
	I1102 13:37:25.401058  333962 start.go:293] postStartSetup for "newest-cni-066482" (driver="docker")
	I1102 13:37:25.401071  333962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:25.401154  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:25.401203  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.420252  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.519659  333962 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:25.522994  333962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:25.523015  333962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:25.523025  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:25.523068  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:25.523146  333962 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:25.523246  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.712619  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:24.712652  333276 machine.go:97] duration metric: took 4.396840284s to provisionDockerMachine
	I1102 13:37:24.712667  333276 start.go:293] postStartSetup for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:37:24.712682  333276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:24.712766  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:24.712819  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.733777  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.836037  333276 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:24.839702  333276 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:24.839733  333276 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:24.839744  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:24.839789  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:24.839894  333276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:24.840014  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.847534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:24.864718  333276 start.go:296] duration metric: took 152.035287ms for postStartSetup
	I1102 13:37:24.864791  333276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:24.864826  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.884885  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.983028  333276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:24.987641  333276 fix.go:56] duration metric: took 5.212515962s for fixHost
	I1102 13:37:24.987669  333276 start.go:83] releasing machines lock for "default-k8s-diff-port-538419", held for 5.212566618s
	I1102 13:37:24.987736  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:25.007034  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.007083  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.007090  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.007125  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.007153  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.007176  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.007213  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.007274  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.007319  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:25.024428  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:25.135885  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.153535  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.171518  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.177840  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.186217  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190875  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190931  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.225348  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.233857  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.242147  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245844  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245889  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.282977  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:25.290988  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.299515  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303360  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303415  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.338843  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.348256  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:25.352326  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
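The link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes: `openssl x509 -hash -noout` prints the hash under which the library looks a CA up inside /etc/ssl/certs, and `ln -fs <cert> <hash>.0` makes the cert discoverable there (the log additionally guards with `test -L` so an existing link is left alone). The same step driven from Go, as a sketch with paths taken from the log and minimal error handling:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		// Same invocation as the log: print only the subject hash.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		// ln -fs equivalent: drop any stale link, then point it at the cert.
		os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", cert)
	}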
	I1102 13:37:25.357122  333276 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:25.357227  333276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:25.361283  333276 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:25.422770  333276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:25.458920  333276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:25.463750  333276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:25.463815  333276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:25.471852  333276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:25.471874  333276 start.go:496] detecting cgroup driver to use...
	I1102 13:37:25.471904  333276 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:25.471948  333276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:25.485878  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:25.497990  333276 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:25.498045  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:25.512402  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:25.525187  333276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:25.608539  333276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:25.688830  333276 docker.go:234] disabling docker service ...
	I1102 13:37:25.688921  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:25.705783  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:25.723506  333276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:25.813168  333276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:25.898289  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:25.910519  333276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:25.924524  333276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:25.924604  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.933372  333276 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:25.933426  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.942218  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.951107  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.959830  333276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:25.967946  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.977032  333276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.986463  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.995429  333276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.003006  333276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.010445  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.094219  333276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.215173  333276 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.215239  333276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.219123  333276 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.219176  333276 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.222728  333276 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.250907  333276 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.250993  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.285974  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.314527  333276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
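Both processes run the identical sed pipeline against /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before restarting crio. After the edits, the touched keys would look roughly like the sketch below; the surrounding layout depends on the drop-in shipped in the kicbase image, so treat it as illustrative only:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]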
	I1102 13:37:25.531179  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.548059  333962 start.go:296] duration metric: took 146.985428ms for postStartSetup
	I1102 13:37:25.548168  333962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:25.548227  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.572631  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.670554  333962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:25.674984  333962 fix.go:56] duration metric: took 4.751471621s for fixHost
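The two df runs above extract the usage percentage and the free gigabytes on /var via awk. An equivalent check without shelling out, using syscall.Statfs (a swapped-in technique for illustration; over SSH, minikube has to use df instead). Linux-only sketch:

	package main

	import (
		"fmt"
		"syscall"
	)

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/var", &st); err != nil {
			panic(err)
		}
		// Space available to unprivileged users, in GiB.
		// Note: df -BG rounds up, while integer division here floors.
		freeGiB := st.Bavail * uint64(st.Bsize) / (1 << 30)
		fmt.Printf("%dG free on /var\n", freeGiB)
	}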
	I1102 13:37:25.675009  333962 start.go:83] releasing machines lock for "newest-cni-066482", held for 4.751529653s
	I1102 13:37:25.675073  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:25.693462  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.693510  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.693517  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.693544  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.693612  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.693646  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.693704  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.693780  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.693820  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.715629  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.832398  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.854465  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.871731  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.877714  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.886048  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889747  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889800  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.924157  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.932269  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.940725  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944474  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944520  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.982544  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.991404  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.999821  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003838  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003886  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.045614  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:26.054860  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:26.058745  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:37:26.062392  333962 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:26.062503  333962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:26.066112  333962 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:26.127272  333962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:26.165639  333962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:26.170693  333962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:26.170747  333962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:26.179292  333962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:26.179317  333962 start.go:496] detecting cgroup driver to use...
	I1102 13:37:26.179346  333962 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:26.179401  333962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:26.194965  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:26.209348  333962 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:26.209406  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:26.224797  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:26.237179  333962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:26.329871  333962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:26.424322  333962 docker.go:234] disabling docker service ...
	I1102 13:37:26.424387  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:26.439911  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:26.453248  333962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:26.542141  333962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:26.630964  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:26.643532  333962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:26.658482  333962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:26.658590  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.668170  333962 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:26.668240  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.678403  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.687532  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.697557  333962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:26.707346  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.718538  333962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.729625  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.743583  333962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.753321  333962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.761369  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.839464  333962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.938004  333962 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.938073  333962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.942145  333962 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.942204  333962 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.946060  333962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.972282  333962 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.972365  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.002057  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.032337  333962 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:27.033686  333962 cli_runner.go:164] Run: docker network inspect newest-cni-066482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:27.051527  333962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:27.055606  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:27.067494  333962 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1102 13:37:22.249113  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:24.748949  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:26.749600  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:26.315635  333276 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:26.333971  333276 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:26.337905  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.348667  333276 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:26.348772  333276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:26.348822  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.387710  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.387730  333276 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:26.387777  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.413505  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.413528  333276 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:26.413538  333276 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1102 13:37:26.413643  333276 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 13:37:26.413707  333276 ssh_runner.go:195] Run: crio config
	I1102 13:37:26.464812  333276 cni.go:84] Creating CNI manager for ""
	I1102 13:37:26.464835  333276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:26.464845  333276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:37:26.464866  333276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538419 NodeName:default-k8s-diff-port-538419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:26.464984  333276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
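The rendered multi-document manifest above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (2224 bytes, matching the scp entry). As a hand check of a config like this, recent kubeadm releases ship a validator; assuming the v1.34.1 binary from /var/lib/minikube/binaries is on PATH, something like:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new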
	I1102 13:37:26.465035  333276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:26.474038  333276 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:26.474098  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:26.483977  333276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 13:37:26.499882  333276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:26.512917  333276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1102 13:37:26.525720  333276 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:26.529537  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.539879  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.630475  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:26.654165  333276 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419 for IP: 192.168.85.2
	I1102 13:37:26.654186  333276 certs.go:195] generating shared ca certs ...
	I1102 13:37:26.654206  333276 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:26.654367  333276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:26.654420  333276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:26.654431  333276 certs.go:257] generating profile certs ...
	I1102 13:37:26.654503  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key
	I1102 13:37:26.654557  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d
	I1102 13:37:26.654639  333276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key
	I1102 13:37:26.654737  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:26.654764  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:26.654773  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:26.654795  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:26.654816  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:26.654836  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:26.654873  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:26.655534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:26.675380  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:26.694442  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:26.715145  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:26.740328  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 13:37:26.762384  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:37:26.779554  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:26.801750  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:37:26.818827  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:26.836709  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:26.855014  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:26.874155  333276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:26.887334  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:26.893721  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:26.902112  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905794  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905842  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.942658  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:26.950976  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:26.959359  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963079  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963124  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.004948  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.013797  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.023152  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027166  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027232  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.065532  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:27.074165  333276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.078238  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.117094  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:27.159482  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:27.208066  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:27.263395  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:27.326908  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
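Each openssl run above uses `-checkend 86400`: exit status 0 means the certificate is still valid 86400 seconds (24 hours) from now, 1 means it is about to expire, which minikube treats as a cue to regenerate. The same probe from Go, with one certificate path copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit status 0 => still valid in 24h; 1 => expiring soon.
		cmd := exec.Command("openssl", "x509", "-noout",
			"-in", "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"-checkend", "86400")
		if err := cmd.Run(); err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				fmt.Println("certificate expires within 24h; needs regeneration")
				return
			}
			panic(err)
		}
		fmt.Println("certificate valid for at least 24h")
	}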
	I1102 13:37:27.369723  333276 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:27.369813  333276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:27.369901  333276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:27.406986  333276 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:37:27.407007  333276 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:37:27.407013  333276 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:37:27.407018  333276 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:37:27.407022  333276 cri.go:89] found id: ""
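cri.go assembles that ID list (four container IDs plus a trailing empty entry) from the invocation at 13:37:27.369 above: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` prints one ID per line for kube-system containers in any state. A minimal Go wrapper over the same flags, for illustration:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}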
	I1102 13:37:27.407085  333276 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:27.422941  333276 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:27Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:27.423012  333276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:27.432001  333276 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:27.432029  333276 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:27.432125  333276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:27.441699  333276 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:27.442817  333276 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-538419" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.443582  333276 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-538419" cluster setting kubeconfig missing "default-k8s-diff-port-538419" context setting]
	I1102 13:37:27.444782  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.446868  333276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:27.456310  333276 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1102 13:37:27.456342  333276 kubeadm.go:602] duration metric: took 24.307485ms to restartPrimaryControlPlane
	I1102 13:37:27.456351  333276 kubeadm.go:403] duration metric: took 86.638872ms to StartCluster
	I1102 13:37:27.456373  333276 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.456425  333276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.458467  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.458734  333276 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:27.458787  333276 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:27.458879  333276 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458899  333276 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.458911  333276 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:27.458908  333276 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458932  333276 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-538419"
	I1102 13:37:27.458925  333276 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458942  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	W1102 13:37:27.458947  333276 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:27.458958  333276 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-538419"
	I1102 13:37:27.458977  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.459272  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459713  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:27.463479  333276 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:27.466531  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.489401  333276 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:27.489460  333276 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.490695  333276 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.490742  333276 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:27.490779  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.490905  333276 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.490993  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:27.491127  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.491342  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.492226  333276 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1102 13:37:24.634329  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:27.133336  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:27.068545  333962 kubeadm.go:884] updating cluster {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:27.068680  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:27.068745  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.101393  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.101420  333962 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:27.101479  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.128092  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.128116  333962 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:27.128126  333962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 13:37:27.128251  333962 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-066482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 13:37:27.128346  333962 ssh_runner.go:195] Run: crio config
	I1102 13:37:27.177989  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:27.178010  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:27.178023  333962 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1102 13:37:27.178058  333962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-066482 NodeName:newest-cni-066482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:27.178237  333962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-066482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:37:27.178304  333962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:27.189125  333962 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:27.189195  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:27.198724  333962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 13:37:27.212769  333962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:27.228632  333962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
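
The kubeadm.yaml.new just copied is a multi-document YAML: the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration printed above, separated by "---". A minimal Go sketch of walking such a file document by document, assuming gopkg.in/yaml.v3 is available; the cgroupDriver check is illustrative only, not minikube's actual validation:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path taken from the scp step in the log above.
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]any
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				panic(err)
			}
			// Each document declares its own schema via apiVersion/kind.
			fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
			// Illustrative check: CRI-O on this image runs under systemd cgroups,
			// so the kubelet must be configured to match.
			if doc["kind"] == "KubeletConfiguration" && doc["cgroupDriver"] != "systemd" {
				fmt.Println("warning: cgroupDriver does not match the CRI-O runtime")
			}
		}
	}
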
	I1102 13:37:27.246146  333962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:27.251613  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
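
The one-liner above updates /etc/hosts idempotently: filter out any existing control-plane.minikube.internal entry, append the current IP, and copy the result back. A rough Go equivalent of the same filter-and-append logic (pinHost is a hypothetical helper, not minikube code):

	package main

	import (
		"os"
		"strings"
	)

	// pinHost keeps exactly one "<ip>\t<name>" line in the hosts file,
	// mirroring the grep -v / echo / cp pipeline in the log.
	func pinHost(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale mapping, like grep -v $'\t<name>$'.
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := pinHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}

The shell version stages the result in /tmp and copies it back with sudo because /etc/hosts is root-owned; the sketch assumes it already runs with sufficient privileges.
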
	I1102 13:37:27.264788  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.377806  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
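
Writing 10-kubeadm.conf, then daemon-reload, then start kubelet is the standard systemd drop-in sequence: the empty ExecStart= in the [Service] section shown earlier clears the packaged command line before the override is added. A sketch of the same three steps under that assumption (the drop-in body is stubbed so the example is self-contained):

	package main

	import (
		"os"
		"os/exec"
	)

	// Stub for the [Service] override rendered earlier in the log.
	const dropIn = "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf\n"

	func main() {
		if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
			panic(err)
		}
		// systemd caches unit files; the override only takes effect after a
		// daemon-reload, which is why the log reloads before starting kubelet.
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			panic(err)
		}
		if err := exec.Command("systemctl", "start", "kubelet").Run(); err != nil {
			panic(err)
		}
	}
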
	I1102 13:37:27.402967  333962 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482 for IP: 192.168.76.2
	I1102 13:37:27.402990  333962 certs.go:195] generating shared ca certs ...
	I1102 13:37:27.403009  333962 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.403159  333962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:27.403219  333962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:27.403231  333962 certs.go:257] generating profile certs ...
	I1102 13:37:27.403335  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.key
	I1102 13:37:27.403407  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b
	I1102 13:37:27.403461  333962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key
	I1102 13:37:27.403744  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:27.403786  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:27.403799  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:27.403828  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:27.403859  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:27.403889  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:27.403938  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:27.404687  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:27.430704  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:27.452417  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:27.483637  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:27.517977  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 13:37:27.573265  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:37:27.598304  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:27.618317  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:37:27.639808  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:27.657181  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:27.681070  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:27.704152  333962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:27.722253  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:27.731519  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:27.743037  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748191  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748248  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.799685  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:27.809081  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:27.818029  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822628  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822681  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.881477  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.891397  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.900808  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904551  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904621  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.942963  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
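
The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs, which is why each certificate is linked as <hash>.0 (the layout c_rehash produces). A small sketch of the same two steps, shelling out to openssl just as the log does (trustCert is a hypothetical helper):

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// trustCert links certPath into /etc/ssl/certs under its OpenSSL
	// subject hash, matching the ln -fs commands in the log.
	func trustCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // tolerate an existing link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}
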
	I1102 13:37:27.952008  333962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.956221  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.997863  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:28.047948  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:28.098660  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:28.159695  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:28.224833  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
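
openssl x509 -checkend 86400 exits non-zero when a certificate expires within 86400 seconds, so the six runs above simply assert that no control-plane cert lapses in the next 24 hours. The equivalent predicate in pure Go, as a sketch rather than minikube's implementation:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// inside d, the same predicate as `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
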
	I1102 13:37:28.294684  333962 kubeadm.go:401] StartCluster: {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:28.294796  333962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:28.294862  333962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:28.338693  333962 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:28.338718  333962 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:28.338726  333962 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:28.338732  333962 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:28.338766  333962 cri.go:89] found id: ""
	I1102 13:37:28.338853  333962 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:28.354945  333962 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:28Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:28.355009  333962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:28.369068  333962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:28.369089  333962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:28.369134  333962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:28.379230  333962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:28.380715  333962 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-066482" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.381840  333962 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-066482" cluster setting kubeconfig missing "newest-cni-066482" context setting]
	I1102 13:37:28.383187  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.385699  333962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:28.395624  333962 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1102 13:37:28.395794  333962 kubeadm.go:602] duration metric: took 26.694184ms to restartPrimaryControlPlane
	I1102 13:37:28.395818  333962 kubeadm.go:403] duration metric: took 101.142697ms to StartCluster
	I1102 13:37:28.395872  333962 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.396257  333962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.398943  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.399509  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:28.399593  333962 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:28.399697  333962 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-066482"
	I1102 13:37:28.399715  333962 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-066482"
	W1102 13:37:28.399723  333962 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:28.399747  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400242  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400322  333962 addons.go:70] Setting dashboard=true in profile "newest-cni-066482"
	I1102 13:37:28.400358  333962 addons.go:239] Setting addon dashboard=true in "newest-cni-066482"
	W1102 13:37:28.400367  333962 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:28.400398  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400424  333962 addons.go:70] Setting default-storageclass=true in profile "newest-cni-066482"
	I1102 13:37:28.400440  333962 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-066482"
	I1102 13:37:28.400747  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400930  333962 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:28.401517  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.404755  333962 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:28.405862  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:28.441415  333962 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 13:37:28.441452  333962 addons.go:239] Setting addon default-storageclass=true in "newest-cni-066482"
	W1102 13:37:28.441469  333962 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:28.441497  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.441992  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.443413  333962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:28.443587  333962 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.493290  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:27.493307  333276 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:27.493359  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.524914  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.531668  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.532019  333276 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.532031  333276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:27.532222  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.567797  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.652323  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.668241  333276 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:27.674864  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:27.674945  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:27.680089  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.693623  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:27.693664  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:27.697013  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.711998  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:27.712105  333276 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:27.730732  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:27.730759  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:27.750616  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:27.750640  333276 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:27.770302  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:27.770348  333276 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:27.786951  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:27.786978  333276 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:27.803298  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:27.803327  333276 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:27.818949  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:27.818969  333276 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:27.832390  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:29.492024  333276 node_ready.go:49] node "default-k8s-diff-port-538419" is "Ready"
	I1102 13:37:29.492059  333276 node_ready.go:38] duration metric: took 1.82377358s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:29.492086  333276 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:29.492140  333276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:30.138979  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458843131s)
	I1102 13:37:30.139203  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.306780942s)
	I1102 13:37:30.139232  333276 api_server.go:72] duration metric: took 2.680469941s to wait for apiserver process to appear ...
	I1102 13:37:30.139245  333276 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:30.139262  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.139337  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.442032819s)
	I1102 13:37:30.140830  333276 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-538419 addons enable metrics-server
	
	I1102 13:37:30.144441  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.144472  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:30.146788  333276 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1102 13:37:28.444400  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:28.444417  333962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:28.444498  333962 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.444527  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:28.444586  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.444500  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.481261  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.483777  333962 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.483797  333962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:28.483850  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.485369  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.519190  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.625401  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:28.638037  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.653422  333962 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:28.653533  333962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:28.682341  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.694090  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:28.694153  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:28.716329  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:28.716362  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:28.737776  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:28.737802  333962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:28.755596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:28.755618  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:28.780596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:28.780618  333962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:28.797326  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:28.797355  333962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:28.814533  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:28.814561  333962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:28.832611  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:28.832643  333962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:28.856649  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:28.856713  333962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:28.874888  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:31.209184  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.571053535s)
	I1102 13:37:31.209241  333962 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.555675413s)
	I1102 13:37:31.209282  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.526844296s)
	I1102 13:37:31.209372  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.334451096s)
	I1102 13:37:31.209287  333962 api_server.go:72] duration metric: took 2.808316845s to wait for apiserver process to appear ...
	I1102 13:37:31.209432  333962 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:31.209539  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.211060  333962 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-066482 addons enable metrics-server
	
	I1102 13:37:31.216831  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.216854  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.222003  333962 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1102 13:37:28.750465  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:30.751057  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:31.223225  333962 addons.go:515] duration metric: took 2.823637855s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:31.709830  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.714383  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.714411  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:32.209645  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:32.214358  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 13:37:32.215702  333962 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:32.215723  333962 api_server.go:131] duration metric: took 1.006197716s to wait for apiserver health ...
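
The transcript above shows the normal restart sequence: /healthz answers 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still pending, then flips to 200 roughly a second later. A sketch of polling it the same way, with the deliberate caveat that real code should trust the cluster CA rather than skip verification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver cert is signed by the cluster CA, not a public
			// one; a sketch skips verification, production code should not.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // the log polls on a similar cadence
		}
		fmt.Println("timed out waiting for /healthz")
	}
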
	I1102 13:37:32.215740  333962 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:32.219326  333962 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:32.219361  333962 system_pods.go:61] "coredns-66bc5c9577-9knvp" [fc8ccf3a-6c3a-4df9-b174-358eea8022b8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219370  333962 system_pods.go:61] "etcd-newest-cni-066482" [b4f125a2-c9c3-4192-bf23-c4ad050bb815] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:32.219379  333962 system_pods.go:61] "kindnet-schdw" [74998f6e-2a7a-40d8-a5c2-a1142f69ee93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 13:37:32.219392  333962 system_pods.go:61] "kube-apiserver-newest-cni-066482" [e270489b-3057-480f-96dd-329cbcc6f0e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:32.219397  333962 system_pods.go:61] "kube-controller-manager-newest-cni-066482" [9b62b1ef-e72e-41f9-9e3d-c57bfaf0b578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:32.219403  333962 system_pods.go:61] "kube-proxy-fkp22" [85a24a6f-4f8c-4671-92f6-fbe43ab7bb10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 13:37:32.219408  333962 system_pods.go:61] "kube-scheduler-newest-cni-066482" [5f88460d-ea42-4891-a458-b86eb57b551e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:32.219417  333962 system_pods.go:61] "storage-provisioner" [3bbb95ec-ecf8-4335-b3df-82a08d03b66b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219424  333962 system_pods.go:74] duration metric: took 3.677705ms to wait for pod list to return data ...
	I1102 13:37:32.219434  333962 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:32.221997  333962 default_sa.go:45] found service account: "default"
	I1102 13:37:32.222015  333962 default_sa.go:55] duration metric: took 2.576388ms for default service account to be created ...
	I1102 13:37:32.222026  333962 kubeadm.go:587] duration metric: took 3.821064355s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:32.222059  333962 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:32.224451  333962 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:32.224479  333962 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:32.224495  333962 node_conditions.go:105] duration metric: took 2.431117ms to run NodePressure ...
	I1102 13:37:32.224508  333962 start.go:242] waiting for startup goroutines ...
	I1102 13:37:32.224519  333962 start.go:247] waiting for cluster config update ...
	I1102 13:37:32.224531  333962 start.go:256] writing updated cluster config ...
	I1102 13:37:32.224891  333962 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:32.277880  333962 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:32.280437  333962 out.go:179] * Done! kubectl is now configured to use "newest-cni-066482" cluster and "default" namespace by default
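In the pod list above, coredns and storage-provisioner are Pending only because the node still carries the node.kubernetes.io/not-ready taint while the CNI restarts; the control-plane pods are already Running. A quick hand check of the taint, and of when it clears, might look like this (a sketch, reusing the context name from this run):

	kubectl --context newest-cni-066482 get node newest-cni-066482 \
	  -o jsonpath='{.spec.taints}'
	kubectl --context newest-cni-066482 -n kube-system get pods --watch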
	W1102 13:37:29.133694  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:31.633878  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:32.248764  321355 pod_ready.go:94] pod "coredns-66bc5c9577-2dtpc" is "Ready"
	I1102 13:37:32.248791  321355 pod_ready.go:86] duration metric: took 36.005777547s for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.251505  321355 pod_ready.go:83] waiting for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.256003  321355 pod_ready.go:94] pod "etcd-no-preload-978795" is "Ready"
	I1102 13:37:32.256030  321355 pod_ready.go:86] duration metric: took 4.500033ms for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.258154  321355 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.262361  321355 pod_ready.go:94] pod "kube-apiserver-no-preload-978795" is "Ready"
	I1102 13:37:32.262386  321355 pod_ready.go:86] duration metric: took 4.208933ms for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.264670  321355 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.446929  321355 pod_ready.go:94] pod "kube-controller-manager-no-preload-978795" is "Ready"
	I1102 13:37:32.446958  321355 pod_ready.go:86] duration metric: took 182.263594ms for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.647228  321355 pod_ready.go:83] waiting for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.046223  321355 pod_ready.go:94] pod "kube-proxy-rmkmd" is "Ready"
	I1102 13:37:33.046245  321355 pod_ready.go:86] duration metric: took 398.98563ms for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.247357  321355 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646686  321355 pod_ready.go:94] pod "kube-scheduler-no-preload-978795" is "Ready"
	I1102 13:37:33.646712  321355 pod_ready.go:86] duration metric: took 399.328602ms for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646724  321355 pod_ready.go:40] duration metric: took 37.476249238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:33.693279  321355 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:33.695127  321355 out.go:179] * Done! kubectl is now configured to use "no-preload-978795" cluster and "default" namespace by default
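The pod_ready polling above is essentially a hand-rolled kubectl wait over the same label set; an equivalent one-shot check would be (a sketch, assuming the context name from the log and the 4m budget the harness mentions):

	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy \
	           component=kube-scheduler; do
	  kubectl --context no-preload-978795 -n kube-system wait pod \
	    -l "$sel" --for=condition=Ready --timeout=4m
	done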
	I1102 13:37:30.148737  333276 addons.go:515] duration metric: took 2.689945409s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:30.639704  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.646596  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.646625  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.140024  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:31.144505  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1102 13:37:31.145652  333276 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:31.145677  333276 api_server.go:131] duration metric: took 1.006426268s to wait for apiserver health ...
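The earlier 500 shows `[-]poststarthook/rbac/bootstrap-roles failed: reason withheld`: the apiserver hides check details from callers it does not consider authorized, and the check turned green one retry later once the bootstrap RBAC roles were reconciled. To re-run the same probe with admin credentials, which normally includes the withheld reason (per-check paths are a documented pattern for the health endpoints, though support can vary by version):

	kubectl --context default-k8s-diff-port-538419 get --raw='/healthz?verbose'
	kubectl --context default-k8s-diff-port-538419 \
	  get --raw='/healthz/poststarthook/rbac/bootstrap-roles'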
	I1102 13:37:31.145686  333276 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:31.148654  333276 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:31.148693  333276 system_pods.go:61] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.148706  333276 system_pods.go:61] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.148715  333276 system_pods.go:61] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.148725  333276 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.148735  333276 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.148740  333276 system_pods.go:61] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.148749  333276 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.148752  333276 system_pods.go:61] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.148758  333276 system_pods.go:74] duration metric: took 3.0672ms to wait for pod list to return data ...
	I1102 13:37:31.148767  333276 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:31.151024  333276 default_sa.go:45] found service account: "default"
	I1102 13:37:31.151047  333276 default_sa.go:55] duration metric: took 2.27431ms for default service account to be created ...
	I1102 13:37:31.151056  333276 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:37:31.153886  333276 system_pods.go:86] 8 kube-system pods found
	I1102 13:37:31.153909  333276 system_pods.go:89] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.153917  333276 system_pods.go:89] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.153923  333276 system_pods.go:89] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.153933  333276 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.153941  333276 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.153948  333276 system_pods.go:89] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.153953  333276 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.153958  333276 system_pods.go:89] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.153965  333276 system_pods.go:126] duration metric: took 2.903516ms to wait for k8s-apps to be running ...
	I1102 13:37:31.153973  333276 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:37:31.154011  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:31.167191  333276 system_svc.go:56] duration metric: took 13.212049ms WaitForService to wait for kubelet
	I1102 13:37:31.167214  333276 kubeadm.go:587] duration metric: took 3.70845301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:37:31.167229  333276 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:31.170065  333276 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:31.170091  333276 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:31.170118  333276 node_conditions.go:105] duration metric: took 2.883566ms to run NodePressure ...
	I1102 13:37:31.170133  333276 start.go:242] waiting for startup goroutines ...
	I1102 13:37:31.170146  333276 start.go:247] waiting for cluster config update ...
	I1102 13:37:31.170163  333276 start.go:256] writing updated cluster config ...
	I1102 13:37:31.170468  333276 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:31.174099  333276 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:31.178339  333276 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 13:37:33.184101  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:34.134125  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:36.633840  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.815058041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.817547049Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0925a0e9-07bb-4e00-b132-5fbaac10ac00 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.818131359Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=16378edb-f173-4577-a620-68011568e5b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.81892156Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.81948379Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.819594906Z" level=info msg="Ran pod sandbox 9fe2b118660a2349ce7da87d04767c5b9b170a2c545a7e6a9b53b67492fa05e8 with infra container: kube-system/kube-proxy-fkp22/POD" id=0925a0e9-07bb-4e00-b132-5fbaac10ac00 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.820257575Z" level=info msg="Ran pod sandbox d5cc96c6555e14b3d5f2b9d09ca394874de3e7bdb7f3c4c581f1cfd935091a1f with infra container: kube-system/kindnet-schdw/POD" id=16378edb-f173-4577-a620-68011568e5b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.820540736Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=bd3e4a4c-9dd0-4f11-88ad-a7c7c7d97f62 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.821139415Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b14b9584-b166-4c74-a8e1-bc5b5ef550fe name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.821433544Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d3e93bf9-49ce-4d16-ada2-358b69604bef name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.82201265Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=72c72455-5131-481e-a931-74826fb84c6c name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.822372913Z" level=info msg="Creating container: kube-system/kube-proxy-fkp22/kube-proxy" id=ddab3503-2a46-4117-992d-77b56c00f8d4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.822495298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.822929036Z" level=info msg="Creating container: kube-system/kindnet-schdw/kindnet-cni" id=0e1fc80d-08de-48e7-a215-eaf277c73a1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.823007965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.827436457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.828551144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.829019726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.829482066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.854993835Z" level=info msg="Created container d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7: kube-system/kindnet-schdw/kindnet-cni" id=0e1fc80d-08de-48e7-a215-eaf277c73a1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.855521424Z" level=info msg="Starting container: d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7" id=0b70370e-1c8f-427e-9d98-7d9b1d500d84 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.85745777Z" level=info msg="Started container" PID=1069 containerID=d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7 description=kube-system/kindnet-schdw/kindnet-cni id=0b70370e-1c8f-427e-9d98-7d9b1d500d84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d5cc96c6555e14b3d5f2b9d09ca394874de3e7bdb7f3c4c581f1cfd935091a1f
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.860655754Z" level=info msg="Created container 505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323: kube-system/kube-proxy-fkp22/kube-proxy" id=ddab3503-2a46-4117-992d-77b56c00f8d4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.861138389Z" level=info msg="Starting container: 505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323" id=736b49cd-2171-43a8-b978-551e899f0f05 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:31 newest-cni-066482 crio[548]: time="2025-11-02T13:37:31.86420223Z" level=info msg="Started container" PID=1070 containerID=505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323 description=kube-system/kube-proxy-fkp22/kube-proxy id=736b49cd-2171-43a8-b978-551e899f0f05 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9fe2b118660a2349ce7da87d04767c5b9b170a2c545a7e6a9b53b67492fa05e8
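The two `Skipping invalid sysctl` warnings are expected: net.ipv4.ip_unprivileged_port_start is a namespaced sysctl, and CRI-O refuses to apply namespaced sysctls to sandboxes that share the host network namespace, which both kube-proxy and kindnet do. A hypothetical spot check of the sandbox's namespace mode from inside the node container (field names in the inspectp JSON can vary across CRI-O versions):

	docker exec newest-cni-066482 sh -c \
	  'crictl inspectp "$(crictl pods --name kube-proxy-fkp22 -q)" \
	     | grep -A5 "\"namespaces\""'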
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d125ac31bfe73       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   d5cc96c6555e1       kindnet-schdw                               kube-system
	505b724e29a6b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   9fe2b118660a2       kube-proxy-fkp22                            kube-system
	a2d506030cda6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   10 seconds ago      Running             etcd                      1                   091ca0ce409a5       etcd-newest-cni-066482                      kube-system
	9244b3749165c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   10 seconds ago      Running             kube-apiserver            1                   c6b8a9d9672e3       kube-apiserver-newest-cni-066482            kube-system
	119e599a978f8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   10 seconds ago      Running             kube-controller-manager   1                   2d51507973994       kube-controller-manager-newest-cni-066482   kube-system
	b46475f69b265       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   10 seconds ago      Running             kube-scheduler            1                   605031e3d45cc       kube-scheduler-newest-cni-066482            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-066482
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-066482
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=newest-cni-066482
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_37_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:37:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-066482
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:37:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:37:30 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:37:30 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:37:30 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 02 Nov 2025 13:37:30 +0000   Sun, 02 Nov 2025 13:37:04 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-066482
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                dba0db4c-1d52-42f8-ac0c-77487a17adc5
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-066482                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-schdw                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-066482             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-066482    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-fkp22                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-066482             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node newest-cni-066482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node newest-cni-066482 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node newest-cni-066482 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node newest-cni-066482 event: Registered Node newest-cni-066482 in Controller
	  Normal  Starting                 11s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s (x9 over 11s)  kubelet          Node newest-cni-066482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-066482 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x7 over 11s)  kubelet          Node newest-cni-066482 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                 node-controller  Node newest-cni-066482 event: Registered Node newest-cni-066482 in Controller
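The Ready=False condition quotes the cause directly: no CNI configuration file in /etc/cni/net.d/, which matches kindnet having restarted seconds earlier; the node flips to Ready once kindnet writes its config. Two checks that follow straight from the message (assuming the docker driver, where the node is a container named after the profile):

	docker exec newest-cni-066482 ls -l /etc/cni/net.d/
	kubectl --context newest-cni-066482 get node newest-cni-066482 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'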
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9] <==
	{"level":"warn","ts":"2025-11-02T13:37:29.878222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.888915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.896730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.904846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.912180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.922261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.931190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.939405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.947197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.954282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.962716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.970731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.978432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.989074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:29.993790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.002421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.009720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.019165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.028860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.038521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.045895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.061860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.069295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.078758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:30.141865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50636","server-name":"","error":"EOF"}
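The burst of `rejected connection ... error: EOF` entries is the signature of TCP-level probes: a client opens the port and closes it without a TLS handshake, etcd logs EOF, and nothing is wrong. A real health query would go through etcdctl in the etcd pod instead (cert paths as minikube's kubeadm templates usually lay them out; check the pod's args if they differ):

	kubectl --context newest-cni-066482 -n kube-system exec etcd-newest-cni-066482 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health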
	
	
	==> kernel <==
	 13:37:38 up  1:20,  0 user,  load average: 4.16, 4.08, 2.70
	Linux newest-cni-066482 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d125ac31bfe73d2c00731ea3bdc78b03a80df9eac8fa6edf34c336347a8c32e7] <==
	I1102 13:37:32.091964       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:37:32.092235       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 13:37:32.092369       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:37:32.092386       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:37:32.092407       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:37:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:37:32.392752       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:37:32.392879       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:37:32.392931       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:37:32.492431       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:37:32.850729       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:37:32.850763       1 metrics.go:72] Registering metrics
	I1102 13:37:32.850824       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c] <==
	I1102 13:37:30.731550       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:37:30.758730       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 13:37:30.759185       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 13:37:30.759261       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 13:37:30.759426       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 13:37:30.759433       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1102 13:37:30.759446       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 13:37:30.759462       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 13:37:30.759784       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 13:37:30.759483       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 13:37:30.759931       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:37:30.765682       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 13:37:30.767271       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:37:30.767434       1 cache.go:39] Caches are synced for autoregister controller
	I1102 13:37:30.974557       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 13:37:31.002181       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:37:31.019365       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:37:31.026164       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:37:31.032208       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:37:31.069101       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.189.153"}
	I1102 13:37:31.078507       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.216.60"}
	I1102 13:37:31.563037       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:37:34.025650       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 13:37:34.325465       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:37:34.425026       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a] <==
	I1102 13:37:34.004055       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:37:34.004078       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:37:34.004087       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:37:34.006388       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1102 13:37:34.008622       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 13:37:34.019877       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 13:37:34.019944       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 13:37:34.019979       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 13:37:34.020191       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 13:37:34.020269       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:37:34.020382       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1102 13:37:34.021377       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 13:37:34.021855       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 13:37:34.024728       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 13:37:34.026932       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 13:37:34.028053       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 13:37:34.028093       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:37:34.030954       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 13:37:34.033289       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 13:37:34.036161       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:37:34.050129       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:37:34.052958       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 13:37:34.055198       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 13:37:34.057849       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 13:37:34.063711       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [505b724e29a6b1022b03bee8e1a61f0dcfcdf8b50f28d390f325dfd8f3e1f323] <==
	I1102 13:37:31.897337       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:37:31.970249       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:37:32.070983       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:37:32.071017       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1102 13:37:32.071103       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:37:32.090588       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:37:32.090657       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:37:32.096179       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:37:32.096629       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:37:32.096669       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:32.099748       1 config.go:309] "Starting node config controller"
	I1102 13:37:32.099775       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:37:32.099784       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:37:32.099798       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:37:32.099810       1 config.go:200] "Starting service config controller"
	I1102 13:37:32.099816       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:37:32.099815       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:37:32.099848       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:37:32.099855       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:37:32.200198       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:37:32.200217       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 13:37:32.200240       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119] <==
	I1102 13:37:29.049547       1 serving.go:386] Generated self-signed cert in-memory
	W1102 13:37:30.582068       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 13:37:30.582102       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 13:37:30.582114       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 13:37:30.582124       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 13:37:30.634065       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:37:30.634096       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:30.637464       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:30.637519       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:30.638796       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:37:30.638886       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1102 13:37:30.645400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1102 13:37:30.653224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 13:37:30.653342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 13:37:30.653711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 13:37:30.658170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 13:37:30.658381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 13:37:30.658496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 13:37:30.658580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 13:37:30.670827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 13:37:30.671038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 13:37:30.671141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 13:37:30.674117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1102 13:37:32.138080       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
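The `Failed to watch ... is forbidden` burst at 13:37:30 lines up with the apiserver's rbac/bootstrap-roles hook still being unfinished: the scheduler started before the bootstrap roles were reconciled, and the final `Caches are synced` line at 13:37:32 shows the informers recovered on retry. Verifying the permission after the fact via impersonation:

	kubectl --context newest-cni-066482 auth can-i list configmaps \
	  -n kube-system --as=system:kube-scheduler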
	
	
	==> kubelet <==
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.608885     697 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-066482\" not found" node="newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.622694     697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.714265     697 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.714372     697 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.714418     697 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.715695     697 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.718691     697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.737151     697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-066482\" already exists" pod="kube-system/kube-controller-manager-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.745137     697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-066482\" already exists" pod="kube-system/kube-controller-manager-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.745170     697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.751217     697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-066482\" already exists" pod="kube-system/kube-scheduler-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.751257     697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.756997     697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-066482\" already exists" pod="kube-system/etcd-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: I1102 13:37:30.757031     697 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-066482"
	Nov 02 13:37:30 newest-cni-066482 kubelet[697]: E1102 13:37:30.763029     697 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-066482\" already exists" pod="kube-system/kube-apiserver-newest-cni-066482"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.506457     697 apiserver.go:52] "Watching apiserver"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.520757     697 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.546132     697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85a24a6f-4f8c-4671-92f6-fbe43ab7bb10-xtables-lock\") pod \"kube-proxy-fkp22\" (UID: \"85a24a6f-4f8c-4671-92f6-fbe43ab7bb10\") " pod="kube-system/kube-proxy-fkp22"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.546257     697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/74998f6e-2a7a-40d8-a5c2-a1142f69ee93-cni-cfg\") pod \"kindnet-schdw\" (UID: \"74998f6e-2a7a-40d8-a5c2-a1142f69ee93\") " pod="kube-system/kindnet-schdw"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.546308     697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74998f6e-2a7a-40d8-a5c2-a1142f69ee93-lib-modules\") pod \"kindnet-schdw\" (UID: \"74998f6e-2a7a-40d8-a5c2-a1142f69ee93\") " pod="kube-system/kindnet-schdw"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.546342     697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85a24a6f-4f8c-4671-92f6-fbe43ab7bb10-lib-modules\") pod \"kube-proxy-fkp22\" (UID: \"85a24a6f-4f8c-4671-92f6-fbe43ab7bb10\") " pod="kube-system/kube-proxy-fkp22"
	Nov 02 13:37:31 newest-cni-066482 kubelet[697]: I1102 13:37:31.546462     697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74998f6e-2a7a-40d8-a5c2-a1142f69ee93-xtables-lock\") pod \"kindnet-schdw\" (UID: \"74998f6e-2a7a-40d8-a5c2-a1142f69ee93\") " pod="kube-system/kindnet-schdw"
	Nov 02 13:37:33 newest-cni-066482 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:37:33 newest-cni-066482 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:37:33 newest-cni-066482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-066482 -n newest-cni-066482
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-066482 -n newest-cni-066482: exit status 2 (321.930877ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-066482 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-9knvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-n26qn kubernetes-dashboard-855c9754f9-zc94t
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-066482 describe pod coredns-66bc5c9577-9knvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-n26qn kubernetes-dashboard-855c9754f9-zc94t
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-066482 describe pod coredns-66bc5c9577-9knvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-n26qn kubernetes-dashboard-855c9754f9-zc94t: exit status 1 (61.981501ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-9knvp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-n26qn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-zc94t" not found

** /stderr **
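Note: the NotFound errors above come from the post-mortem helper, not from cluster state: kubectl describe pod without -n or -A searches only the default namespace, while the non-running pods listed at helpers_test.go:280 live in kube-system and kubernetes-dashboard. A namespaced query would locate them (assuming the pods still existed when describe ran), e.g.:

	kubectl --context newest-cni-066482 -n kube-system describe pod coredns-66bc5c9577-9knvp storage-provisioner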
helpers_test.go:287: kubectl --context newest-cni-066482 describe pod coredns-66bc5c9577-9knvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-n26qn kubernetes-dashboard-855c9754f9-zc94t: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.79s)

TestStartStop/group/no-preload/serial/Pause (5.4s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-978795 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-978795 --alsologtostderr -v=1: exit status 80 (1.592316114s)

-- stdout --
	* Pausing node no-preload-978795 ... 

-- /stdout --
** stderr ** 
	I1102 13:37:45.467097  340089 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:37:45.467376  340089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:45.467386  340089 out.go:374] Setting ErrFile to fd 2...
	I1102 13:37:45.467390  340089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:45.467580  340089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:37:45.467791  340089 out.go:368] Setting JSON to false
	I1102 13:37:45.467809  340089 mustload.go:66] Loading cluster: no-preload-978795
	I1102 13:37:45.468780  340089 config.go:182] Loaded profile config "no-preload-978795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:45.469490  340089 cli_runner.go:164] Run: docker container inspect no-preload-978795 --format={{.State.Status}}
	I1102 13:37:45.503464  340089 host.go:66] Checking if "no-preload-978795" exists ...
	I1102 13:37:45.503857  340089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:45.563213  340089 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-02 13:37:45.553364526 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:45.563905  340089 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-978795 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 13:37:45.565732  340089 out.go:179] * Pausing node no-preload-978795 ... 
	I1102 13:37:45.566824  340089 host.go:66] Checking if "no-preload-978795" exists ...
	I1102 13:37:45.567067  340089 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:45.567125  340089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-978795
	I1102 13:37:45.587137  340089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33115 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/no-preload-978795/id_rsa Username:docker}
	I1102 13:37:45.688815  340089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:45.701210  340089 pause.go:52] kubelet running: true
	I1102 13:37:45.701292  340089 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:37:45.876251  340089 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:37:45.876356  340089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:37:45.939807  340089 cri.go:89] found id: "a73f6a76ff53b733881343080969a41157d4e38c47e3c02ac95e3ecec4cb6872"
	I1102 13:37:45.939830  340089 cri.go:89] found id: "41eca953e73915c5487a90eacb76fd485c68d0c1a2f13cf2a8df0205bdc80ac9"
	I1102 13:37:45.939834  340089 cri.go:89] found id: "00e1b7154486fe063132557a594935f8ddd4344e0b78ebd768f66fc54e72cefb"
	I1102 13:37:45.939836  340089 cri.go:89] found id: "8279fbff65bbb7eeddd8cf2d2a8220d1e7e1e278aaf553e70450472f1a32cd21"
	I1102 13:37:45.939840  340089 cri.go:89] found id: "a53e288237c421817e13c84735208e1931104dd178dda81c3e30acbe2d0a7400"
	I1102 13:37:45.939845  340089 cri.go:89] found id: "d75465a215601ad6902284a8f4ac503bad1e462f3234ddee3675f0f0f025f32b"
	I1102 13:37:45.939850  340089 cri.go:89] found id: "34d6d45b3c166e3ece7cae55497eada59f4b0a2911a5e1fda5cfa3e653f11a69"
	I1102 13:37:45.939854  340089 cri.go:89] found id: "42ae707ea436fef32dc405c69f4b8a2094bf96e5cb62e2fb5a4f97d5c5f87181"
	I1102 13:37:45.939858  340089 cri.go:89] found id: "05fdfa04c33553bae9ce98eabd014d2c5fe0f3155fff9f5518fc306c67872c48"
	I1102 13:37:45.939875  340089 cri.go:89] found id: "70dfe3162f618f62b2c5d06ccdd44e000f798419a2105ab94355cb811cef8cd6"
	I1102 13:37:45.939883  340089 cri.go:89] found id: "50fcd43f519f563b94c125de25dd0ac0b2df5a5a28d43437ec792a2453868dbd"
	I1102 13:37:45.939887  340089 cri.go:89] found id: ""
	I1102 13:37:45.939950  340089 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:37:45.951469  340089 retry.go:31] will retry after 349.711244ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:45Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:46.302145  340089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:46.315015  340089 pause.go:52] kubelet running: false
	I1102 13:37:46.315082  340089 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:37:46.451481  340089 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:37:46.451555  340089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:37:46.518683  340089 cri.go:89] found id: "a73f6a76ff53b733881343080969a41157d4e38c47e3c02ac95e3ecec4cb6872"
	I1102 13:37:46.518706  340089 cri.go:89] found id: "41eca953e73915c5487a90eacb76fd485c68d0c1a2f13cf2a8df0205bdc80ac9"
	I1102 13:37:46.518710  340089 cri.go:89] found id: "00e1b7154486fe063132557a594935f8ddd4344e0b78ebd768f66fc54e72cefb"
	I1102 13:37:46.518713  340089 cri.go:89] found id: "8279fbff65bbb7eeddd8cf2d2a8220d1e7e1e278aaf553e70450472f1a32cd21"
	I1102 13:37:46.518716  340089 cri.go:89] found id: "a53e288237c421817e13c84735208e1931104dd178dda81c3e30acbe2d0a7400"
	I1102 13:37:46.518719  340089 cri.go:89] found id: "d75465a215601ad6902284a8f4ac503bad1e462f3234ddee3675f0f0f025f32b"
	I1102 13:37:46.518721  340089 cri.go:89] found id: "34d6d45b3c166e3ece7cae55497eada59f4b0a2911a5e1fda5cfa3e653f11a69"
	I1102 13:37:46.518724  340089 cri.go:89] found id: "42ae707ea436fef32dc405c69f4b8a2094bf96e5cb62e2fb5a4f97d5c5f87181"
	I1102 13:37:46.518726  340089 cri.go:89] found id: "05fdfa04c33553bae9ce98eabd014d2c5fe0f3155fff9f5518fc306c67872c48"
	I1102 13:37:46.518742  340089 cri.go:89] found id: "70dfe3162f618f62b2c5d06ccdd44e000f798419a2105ab94355cb811cef8cd6"
	I1102 13:37:46.518744  340089 cri.go:89] found id: "50fcd43f519f563b94c125de25dd0ac0b2df5a5a28d43437ec792a2453868dbd"
	I1102 13:37:46.518747  340089 cri.go:89] found id: ""
	I1102 13:37:46.518783  340089 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:37:46.530580  340089 retry.go:31] will retry after 232.113343ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:46Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:46.762976  340089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:46.775726  340089 pause.go:52] kubelet running: false
	I1102 13:37:46.775785  340089 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:37:46.915441  340089 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:37:46.915534  340089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:37:46.981403  340089 cri.go:89] found id: "a73f6a76ff53b733881343080969a41157d4e38c47e3c02ac95e3ecec4cb6872"
	I1102 13:37:46.981425  340089 cri.go:89] found id: "41eca953e73915c5487a90eacb76fd485c68d0c1a2f13cf2a8df0205bdc80ac9"
	I1102 13:37:46.981431  340089 cri.go:89] found id: "00e1b7154486fe063132557a594935f8ddd4344e0b78ebd768f66fc54e72cefb"
	I1102 13:37:46.981436  340089 cri.go:89] found id: "8279fbff65bbb7eeddd8cf2d2a8220d1e7e1e278aaf553e70450472f1a32cd21"
	I1102 13:37:46.981440  340089 cri.go:89] found id: "a53e288237c421817e13c84735208e1931104dd178dda81c3e30acbe2d0a7400"
	I1102 13:37:46.981446  340089 cri.go:89] found id: "d75465a215601ad6902284a8f4ac503bad1e462f3234ddee3675f0f0f025f32b"
	I1102 13:37:46.981450  340089 cri.go:89] found id: "34d6d45b3c166e3ece7cae55497eada59f4b0a2911a5e1fda5cfa3e653f11a69"
	I1102 13:37:46.981454  340089 cri.go:89] found id: "42ae707ea436fef32dc405c69f4b8a2094bf96e5cb62e2fb5a4f97d5c5f87181"
	I1102 13:37:46.981458  340089 cri.go:89] found id: "05fdfa04c33553bae9ce98eabd014d2c5fe0f3155fff9f5518fc306c67872c48"
	I1102 13:37:46.981466  340089 cri.go:89] found id: "70dfe3162f618f62b2c5d06ccdd44e000f798419a2105ab94355cb811cef8cd6"
	I1102 13:37:46.981470  340089 cri.go:89] found id: "50fcd43f519f563b94c125de25dd0ac0b2df5a5a28d43437ec792a2453868dbd"
	I1102 13:37:46.981483  340089 cri.go:89] found id: ""
	I1102 13:37:46.981523  340089 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:37:46.995591  340089 out.go:203] 
	W1102 13:37:46.996733  340089 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:37:46.996749  340089 out.go:285] * 
	* 
	W1102 13:37:47.000923  340089 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:37:47.002130  340089 out.go:203] 

** /stderr **
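Note: the GUEST_PAUSE failure above is mechanical: after disabling the kubelet (sudo systemctl disable --now kubelet) and enumerating CRI containers per namespace with crictl, the pause path shells out to sudo runc list -f json to find containers to freeze. On this crio node /run/runc does not exist, so every runc call fails; the trace shows two retries (~350ms and ~232ms backoffs) before the third failure is treated as fatal and the binary exits with status 80. The Go sketch below reproduces just that probe-and-retry step under those assumptions; it is an illustration, not minikube's pause implementation, and listRunningContainers is an invented name.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// listRunningContainers runs the same command the trace shows failing:
// `sudo runc list -f json` (it errors with "open /run/runc: no such file
// or directory" when the runc state directory is absent).
func listRunningContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listRunningContainers()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		fmt.Fprintf(os.Stderr, "attempt %d failed: %v\n%s", attempt, err, out)
		if attempt < 3 {
			// The trace uses randomized backoffs (retry.go:31); a fixed
			// delay is enough for the sketch.
			time.Sleep(300 * time.Millisecond)
		}
	}
	os.Exit(80) // mirrors the exit status the test observed for GUEST_PAUSE
}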
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-978795 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-978795
helpers_test.go:243: (dbg) docker inspect no-preload-978795:

-- stdout --
	[
	    {
	        "Id": "f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e",
	        "Created": "2025-11-02T13:35:24.534535218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321737,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:36:42.473270325Z",
	            "FinishedAt": "2025-11-02T13:36:41.51136344Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/hosts",
	        "LogPath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e-json.log",
	        "Name": "/no-preload-978795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-978795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-978795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e",
	                "LowerDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-978795",
	                "Source": "/var/lib/docker/volumes/no-preload-978795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-978795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-978795",
	                "name.minikube.sigs.k8s.io": "no-preload-978795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "69c1c9465fae5b6b35479ca6bf37ee803235e9e4ce6518ee37b6698aa0a87d63",
	            "SandboxKey": "/var/run/docker/netns/69c1c9465fae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-978795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:0d:1e:e1:a3:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11ed3231c38232a3af5735052e72b0c429b6b7e978e401e7b612ef36fc53303a",
	                    "EndpointID": "e75a164c6ccbddc1839435a511dfd22da363e2808ea38463599b80251666580a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-978795",
	                        "f2b4d88c9fa8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
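Note: the Ports block above is also where the failed pause run got its SSH endpoint: the 22/tcp binding (127.0.0.1:33115) is exactly the address in the sshutil.go:53 line of the pause trace. A minimal sketch of that lookup follows, reusing the docker CLI template the trace runs at 13:37:45.567125; the Go wrapper around it is illustrative, not minikube code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the trace passes to `docker container inspect -f`;
	// it selects the first host binding of the container's 22/tcp port.
	tmpl := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"no-preload-978795").Output()
	if err != nil {
		panic(err)
	}
	// Prints 33115 for the inspect output above (surrounding quotes stripped).
	fmt.Println(strings.Trim(strings.TrimSpace(string(out)), "'"))
}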
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-978795 -n no-preload-978795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-978795 -n no-preload-978795: exit status 2 (326.024267ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-978795 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-978795 logs -n 25: (1.073523082s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-054159 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-054159 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-978795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-748183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ start   │ -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ stop    │ -p embed-certs-748183 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538419 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p embed-certs-748183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ stop    │ -p newest-cni-066482 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-066482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ newest-cni-066482 image list --format=json                                                                                                                                                                                                    │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p newest-cni-066482 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ no-preload-978795 image list --format=json                                                                                                                                                                                                    │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p no-preload-978795 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:37:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:37:20.524373  333962 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:37:20.524647  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524664  333962 out.go:374] Setting ErrFile to fd 2...
	I1102 13:37:20.524670  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524846  333962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:37:20.525403  333962 out.go:368] Setting JSON to false
	I1102 13:37:20.526966  333962 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4793,"bootTime":1762085848,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:37:20.527085  333962 start.go:143] virtualization: kvm guest
	I1102 13:37:20.531180  333962 out.go:179] * [newest-cni-066482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:37:20.533535  333962 notify.go:221] Checking for updates...
	I1102 13:37:20.533705  333962 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:37:20.535165  333962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:37:20.536733  333962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:20.538369  333962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:37:20.539773  333962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:37:20.541014  333962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:37:20.543949  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:20.544901  333962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:37:20.580929  333962 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:37:20.581269  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.677940  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.664880977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.678092  333962 docker.go:319] overlay module found
	I1102 13:37:20.686090  333962 out.go:179] * Using the docker driver based on existing profile
	I1102 13:37:20.689767  333962 start.go:309] selected driver: docker
	I1102 13:37:20.689788  333962 start.go:930] validating driver "docker" against &{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.689907  333962 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:37:20.690830  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.765132  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.75342287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.765679  333962 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:20.765731  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:20.765799  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:20.765881  333962 start.go:353] cluster config:
	{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.825212  333962 out.go:179] * Starting "newest-cni-066482" primary control-plane node in "newest-cni-066482" cluster
	I1102 13:37:20.829240  333962 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:37:20.869092  333962 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:37:20.895924  333962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:37:20.895925  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:20.896230  333962 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:37:20.896249  333962 cache.go:59] Caching tarball of preloaded images
	I1102 13:37:20.896370  333962 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:37:20.896389  333962 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:37:20.896531  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:20.923310  333962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:37:20.923336  333962 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:37:20.923354  333962 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:37:20.923397  333962 start.go:360] acquireMachinesLock for newest-cni-066482: {Name:mk25ceca9700045fc79c727ac5793f50b1f35f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:37:20.923467  333962 start.go:364] duration metric: took 45.165µs to acquireMachinesLock for "newest-cni-066482"
	I1102 13:37:20.923495  333962 start.go:96] Skipping create...Using existing machine configuration
	I1102 13:37:20.923507  333962 fix.go:54] fixHost starting: 
	I1102 13:37:20.923821  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:20.947956  333962 fix.go:112] recreateIfNeeded on newest-cni-066482: state=Stopped err=<nil>
	W1102 13:37:20.947991  333962 fix.go:138] unexpected machine state, will restart: <nil>
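
The recreateIfNeeded check above probes the stopped container's state via `docker container inspect --format={{.State.Status}}` before deciding to restart it. A minimal Go sketch of that probe, assuming only the docker CLI on PATH (the container name is taken from the log; the state handling is illustrative, not minikube's exact code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState asks the docker CLI for a container's State.Status.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("newest-cni-066482")
        if err != nil {
            fmt.Println("no such container:", err)
            return
        }
        fmt.Println("state:", state)
        if state != "running" {
            // the "unexpected machine state, will restart" branch above
            exec.Command("docker", "start", "newest-cni-066482").Run()
        }
    }
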
	W1102 13:37:17.749910  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:19.754111  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:18.133437  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:20.135974  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:22.633523  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
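
The interleaved W... pod_ready lines come from loops that re-check coredns readiness every couple of seconds. A hedged client-go sketch of such a poll (pod name and namespace are from the log, the kubeconfig path is the standard default; this is not minikube's exact implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's PodReady condition is True.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-66bc5c9577-2dtpc", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // roughly the cadence of the W... lines
        }
    }
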
	I1102 13:37:19.800458  333276 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-538419" ...
	I1102 13:37:19.800582  333276 cli_runner.go:164] Run: docker start default-k8s-diff-port-538419
	I1102 13:37:20.258040  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:20.285518  333276 kic.go:430] container "default-k8s-diff-port-538419" state is running.
	I1102 13:37:20.285975  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:20.314790  333276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:37:20.315668  333276 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:20.316243  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:20.344162  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:20.344635  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:20.344656  333276 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:20.345938  333276 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42554->127.0.0.1:33130: read: connection reset by peer
	I1102 13:37:23.485888  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.485911  333276 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-538419"
	I1102 13:37:23.485968  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.504539  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.504787  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.504808  333276 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-538419 && echo "default-k8s-diff-port-538419" | sudo tee /etc/hostname
	I1102 13:37:23.654299  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.654392  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.673075  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.673329  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.673355  333276 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-538419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-538419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-538419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:23.814290  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
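
The SSH script above makes the /etc/hosts edit idempotent: if a 127.0.1.1 line already exists it is rewritten in place (the sed branch), otherwise a new entry is appended (the tee -a branch). The same logic as a small Go sketch, operating on a local copy since the real flow runs the shell version under sudo over SSH:

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry guarantees exactly one "127.0.1.1 <hostname>" line.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        entry := "127.0.1.1 " + hostname
        content := string(data)
        if strings.Contains(content, entry) {
            return nil // already present, nothing to do
        }
        lines := strings.Split(content, "\n")
        replaced := false
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = entry // the sed branch: rewrite the existing line
                replaced = true
                break
            }
        }
        if !replaced {
            lines = append(lines, entry) // the tee -a branch: append
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/tmp/hosts.copy", "default-k8s-diff-port-538419"); err != nil {
            panic(err)
        }
    }
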
	I1102 13:37:23.814321  333276 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:23.814341  333276 ubuntu.go:190] setting up certificates
	I1102 13:37:23.814351  333276 provision.go:84] configureAuth start
	I1102 13:37:23.814396  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:23.831955  333276 provision.go:143] copyHostCerts
	I1102 13:37:23.832026  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:23.832046  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:23.832132  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:23.832261  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:23.832273  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:23.832318  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:23.832420  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:23.832433  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:23.832471  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:23.832546  333276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-538419 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-538419 localhost minikube]
	I1102 13:37:24.219472  333276 provision.go:177] copyRemoteCerts
	I1102 13:37:24.219536  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.219587  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.237848  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.340891  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 13:37:24.358910  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:24.376167  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:24.393830  333276 provision.go:87] duration metric: took 579.46643ms to configureAuth
	I1102 13:37:24.393865  333276 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:24.394064  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:24.394157  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.412877  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.413122  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:24.413143  333276 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:20.978818  333962 out.go:252] * Restarting existing docker container for "newest-cni-066482" ...
	I1102 13:37:20.978914  333962 cli_runner.go:164] Run: docker start newest-cni-066482
	I1102 13:37:21.270167  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:21.288682  333962 kic.go:430] container "newest-cni-066482" state is running.
	I1102 13:37:21.289009  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:21.309331  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:21.309611  333962 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:21.309709  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:21.330053  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:21.330413  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:21.330432  333962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:21.331174  333962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55362->127.0.0.1:33135: read: connection reset by peer
	I1102 13:37:24.473386  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.473415  333962 ubuntu.go:182] provisioning hostname "newest-cni-066482"
	I1102 13:37:24.473479  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.491931  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.492137  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.492150  333962 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-066482 && echo "newest-cni-066482" | sudo tee /etc/hostname
	I1102 13:37:24.643677  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.643803  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.663238  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.663468  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.663495  333962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-066482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-066482/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-066482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:24.810077  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:24.810117  333962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:24.810141  333962 ubuntu.go:190] setting up certificates
	I1102 13:37:24.810156  333962 provision.go:84] configureAuth start
	I1102 13:37:24.810212  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:24.827792  333962 provision.go:143] copyHostCerts
	I1102 13:37:24.827858  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:24.827875  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:24.827953  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:24.828150  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:24.828164  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:24.828215  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:24.828305  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:24.828317  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:24.828355  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:24.828426  333962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.newest-cni-066482 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-066482]
	I1102 13:37:24.927237  333962 provision.go:177] copyRemoteCerts
	I1102 13:37:24.927289  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.927321  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.944584  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.045425  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:25.062863  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:25.080629  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:37:25.097296  333962 provision.go:87] duration metric: took 287.125327ms to configureAuth
	I1102 13:37:25.097332  333962 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:25.097535  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:25.097668  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.115731  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:25.115937  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:25.115955  333962 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:25.401017  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:25.401045  333962 machine.go:97] duration metric: took 4.091415666s to provisionDockerMachine
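
The step that just completed writes a one-line environment file consumed by the crio systemd unit and then restarts the service. A Go sketch of the equivalent, assuming root and systemd; paths and the option string are taken verbatim from the logged command:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        opts := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0644); err != nil {
            panic(err)
        }
        // equivalent of `sudo systemctl restart crio`
        if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
            panic(err)
        }
    }
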
	I1102 13:37:25.401058  333962 start.go:293] postStartSetup for "newest-cni-066482" (driver="docker")
	I1102 13:37:25.401071  333962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:25.401154  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:25.401203  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.420252  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.519659  333962 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:25.522994  333962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:25.523015  333962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:25.523025  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:25.523068  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:25.523146  333962 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:25.523246  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.712619  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:24.712652  333276 machine.go:97] duration metric: took 4.396840284s to provisionDockerMachine
	I1102 13:37:24.712667  333276 start.go:293] postStartSetup for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:37:24.712682  333276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:24.712766  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:24.712819  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.733777  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.836037  333276 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:24.839702  333276 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:24.839733  333276 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:24.839744  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:24.839789  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:24.839894  333276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:24.840014  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.847534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:24.864718  333276 start.go:296] duration metric: took 152.035287ms for postStartSetup
	I1102 13:37:24.864791  333276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:24.864826  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.884885  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.983028  333276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:24.987641  333276 fix.go:56] duration metric: took 5.212515962s for fixHost
	I1102 13:37:24.987669  333276 start.go:83] releasing machines lock for "default-k8s-diff-port-538419", held for 5.212566618s
	I1102 13:37:24.987736  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:25.007034  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.007083  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.007090  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.007125  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.007153  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.007176  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.007213  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.007274  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.007319  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:25.024428  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:25.135885  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.153535  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.171518  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.177840  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.186217  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190875  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190931  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.225348  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.233857  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.242147  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245844  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245889  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.282977  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:25.290988  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.299515  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303360  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303415  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.338843  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
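
The `openssl x509 -hash -noout` invocations above compute the subject hash OpenSSL uses to look up CAs, and the `ln -fs` commands create the corresponding "<hash>.0" links in /etc/ssl/certs (b5213941.0 is minikubeCA's hash, per the log). A sketch of that wiring, assuming the openssl binary on PATH and write access to /etc/ssl/certs:

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // -f semantics: replace any stale link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }
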
	I1102 13:37:25.348256  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:25.352326  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:37:25.357122  333276 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:25.357227  333276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:25.361283  333276 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:25.422770  333276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:25.458920  333276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:25.463750  333276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:25.463815  333276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:25.471852  333276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
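
The find command above renames any pre-existing bridge/podman CNI configs to *.mk_disabled so they cannot conflict with the CNI minikube installs. An equivalent sketch using filepath.Glob (non-recursive over the directory, like find's -maxdepth 1):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                fmt.Printf("%s, ", m) // mirrors find's -printf "%p, "
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    panic(err)
                }
            }
        }
    }
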
	I1102 13:37:25.471874  333276 start.go:496] detecting cgroup driver to use...
	I1102 13:37:25.471904  333276 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:25.471948  333276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:25.485878  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:25.497990  333276 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:25.498045  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:25.512402  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:25.525187  333276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:25.608539  333276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:25.688830  333276 docker.go:234] disabling docker service ...
	I1102 13:37:25.688921  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:25.705783  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:25.723506  333276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:25.813168  333276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:25.898289  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:25.910519  333276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:25.924524  333276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:25.924604  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.933372  333276 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:25.933426  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.942218  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.951107  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.959830  333276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:25.967946  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.977032  333276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.986463  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.995429  333276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.003006  333276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.010445  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.094219  333276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.215173  333276 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.215239  333276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.219123  333276 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.219176  333276 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.222728  333276 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.250907  333276 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
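
"Will wait 60s for crictl version" above is a bounded retry against the runtime socket until cri-o answers. A sketch of such a wait loop (binary path from the log, root assumed; the one-second poll interval is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for {
            out, err := exec.Command("/usr/local/bin/crictl", "version").Output()
            if err == nil {
                fmt.Print(string(out)) // RuntimeName: cri-o, RuntimeVersion: 1.34.1, ...
                return
            }
            if time.Now().After(deadline) {
                panic("crictl did not answer within 60s")
            }
            time.Sleep(time.Second)
        }
    }
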
	I1102 13:37:26.250993  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.285974  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.314527  333276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:25.531179  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.548059  333962 start.go:296] duration metric: took 146.985428ms for postStartSetup
	I1102 13:37:25.548168  333962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:25.548227  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.572631  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.670554  333962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:25.674984  333962 fix.go:56] duration metric: took 4.751471621s for fixHost
	I1102 13:37:25.675009  333962 start.go:83] releasing machines lock for "newest-cni-066482", held for 4.751529653s
	I1102 13:37:25.675073  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:25.693462  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.693510  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.693517  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.693544  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.693612  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.693646  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.693704  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.693780  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.693820  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.715629  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.832398  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.854465  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.871731  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.877714  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.886048  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889747  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889800  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.924157  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.932269  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.940725  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944474  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944520  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.982544  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.991404  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.999821  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003838  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003886  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.045614  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:26.054860  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:26.058745  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:37:26.062392  333962 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:26.062503  333962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:26.066112  333962 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:26.127272  333962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:26.165639  333962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:26.170693  333962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:26.170747  333962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:26.179292  333962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:26.179317  333962 start.go:496] detecting cgroup driver to use...
	I1102 13:37:26.179346  333962 detect.go:190] detected "systemd" cgroup driver on host os
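
How the "systemd" cgroup driver gets detected is not shown in the log; one common heuristic is to check for systemd's runtime directory or the cgroup v2 unified hierarchy. A hedged sketch of that heuristic, not necessarily minikube's exact logic:

    package main

    import (
        "fmt"
        "os"
    )

    func detectCgroupDriver() string {
        // systemd hosts expose /run/systemd/system; cgroup v2 hosts expose the
        // unified hierarchy's controller list. Either suggests the systemd driver.
        if _, err := os.Stat("/run/systemd/system"); err == nil {
            return "systemd"
        }
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            return "systemd"
        }
        return "cgroupfs"
    }

    func main() {
        fmt.Printf("detected %q cgroup driver on host os\n", detectCgroupDriver())
    }
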
	I1102 13:37:26.179401  333962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:26.194965  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:26.209348  333962 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:26.209406  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:26.224797  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:26.237179  333962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:26.329871  333962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:26.424322  333962 docker.go:234] disabling docker service ...
	I1102 13:37:26.424387  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:26.439911  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:26.453248  333962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:26.542141  333962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:26.630964  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:26.643532  333962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:26.658482  333962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:26.658590  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.668170  333962 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:26.668240  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.678403  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.687532  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.697557  333962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:26.707346  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.718538  333962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.729625  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.743583  333962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.753321  333962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.761369  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.839464  333962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.938004  333962 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.938073  333962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.942145  333962 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.942204  333962 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.946060  333962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.972282  333962 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.972365  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.002057  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.032337  333962 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:27.033686  333962 cli_runner.go:164] Run: docker network inspect newest-cni-066482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:27.051527  333962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:27.055606  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:27.067494  333962 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1102 13:37:22.249113  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:24.748949  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:26.749600  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:26.315635  333276 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:26.333971  333276 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:26.337905  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.348667  333276 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:26.348772  333276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:26.348822  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.387710  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.387730  333276 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:26.387777  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.413505  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.413528  333276 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:26.413538  333276 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1102 13:37:26.413643  333276 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 13:37:26.413707  333276 ssh_runner.go:195] Run: crio config
	I1102 13:37:26.464812  333276 cni.go:84] Creating CNI manager for ""
	I1102 13:37:26.464835  333276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:26.464845  333276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:37:26.464866  333276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538419 NodeName:default-k8s-diff-port-538419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:26.464984  333276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:37:26.465035  333276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:26.474038  333276 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:26.474098  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:26.483977  333276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 13:37:26.499882  333276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:26.512917  333276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
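For reference: the rendered kubeadm config shown above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before kubelet is started. As a hedged aside (not something minikube runs here), recent kubeadm releases ship a validator that can sanity-check a file of this shape offline:

	# Sketch, assuming kubeadm >= v1.26, which added "kubeadm config validate";
	# exits non-zero on schema or field errors in the staged config.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
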
	I1102 13:37:26.525720  333276 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:26.529537  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
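The /etc/hosts rewrite above is an idempotent replace-then-append: grep -v drops any stale control-plane.minikube.internal line (the $'\t...' ANSI-C quoting matches a literal tab), echo appends the fresh mapping, and the temp file is copied back with sudo. A minimal sketch of the same idiom; ensure_host_entry is a hypothetical helper name, the pattern mirrors the logged command:

	# Ensure exactly one hosts entry mapping NAME -> IP, replacing any stale one.
	ensure_host_entry() {
	  local ip="$1" name="$2"
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	}
	ensure_host_entry 192.168.85.2 control-plane.minikube.internal
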
	I1102 13:37:26.539879  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.630475  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:26.654165  333276 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419 for IP: 192.168.85.2
	I1102 13:37:26.654186  333276 certs.go:195] generating shared ca certs ...
	I1102 13:37:26.654206  333276 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:26.654367  333276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:26.654420  333276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:26.654431  333276 certs.go:257] generating profile certs ...
	I1102 13:37:26.654503  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key
	I1102 13:37:26.654557  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d
	I1102 13:37:26.654639  333276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key
	I1102 13:37:26.654737  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:26.654764  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:26.654773  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:26.654795  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:26.654816  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:26.654836  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:26.654873  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:26.655534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:26.675380  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:26.694442  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:26.715145  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:26.740328  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 13:37:26.762384  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:37:26.779554  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:26.801750  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:37:26.818827  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:26.836709  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:26.855014  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:26.874155  333276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:26.887334  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:26.893721  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:26.902112  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905794  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905842  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.942658  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:26.950976  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:26.959359  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963079  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963124  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.004948  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.013797  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.023152  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027166  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027232  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.065532  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
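The openssl x509 -hash -noout calls above print each CA's subject hash, which is exactly the filename stem OpenSSL expects for lookup symlinks in /etc/ssl/certs (hence b5213941.0 for minikubeCA.pem). The convention in isolation:

	# Link a CA into OpenSSL's lookup directory under its subject hash.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
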
	I1102 13:37:27.074165  333276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.078238  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.117094  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:27.159482  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:27.208066  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:27.263395  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:27.326908  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
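Each -checkend 86400 probe above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a non-zero status flags a cert that is expired or expiring imminently. The same sweep, condensed:

	# Warn on any control-plane cert that expires within the next 24h
	# (file list trimmed to two of the certs checked in the log).
	for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/etcd/server.crt; do
	  sudo openssl x509 -noout -in "$crt" -checkend 86400 \
	    || echo "WARN: $crt expires within 24h"
	done
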
	I1102 13:37:27.369723  333276 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:27.369813  333276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:27.369901  333276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:27.406986  333276 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:37:27.407007  333276 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:37:27.407013  333276 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:37:27.407018  333276 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:37:27.407022  333276 cri.go:89] found id: ""
	I1102 13:37:27.407085  333276 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:27.422941  333276 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:27Z" level=error msg="open /run/runc: no such file or directory"
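The runc failure above is expected on this node: runc keeps per-container state under /run/runc by default, so "open /run/runc: no such file or directory" simply means no runc-managed state exists yet, and minikube logs the unpause attempt as failed and continues. The equivalent manual probe, with the default state root made explicit:

	# List runc container state, naming the default root directory explicitly.
	sudo runc --root /run/runc list -f json
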
	I1102 13:37:27.423012  333276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:27.432001  333276 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:27.432029  333276 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:27.432125  333276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:27.441699  333276 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:27.442817  333276 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-538419" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.443582  333276 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-538419" cluster setting kubeconfig missing "default-k8s-diff-port-538419" context setting]
	I1102 13:37:27.444782  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.446868  333276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:27.456310  333276 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
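The "does not require reconfiguration" decision above comes from the diff -u two lines earlier: diff exits 0 when /var/tmp/minikube/kubeadm.yaml and kubeadm.yaml.new are identical, letting the restart path skip kubeadm reconfiguration. The same check in isolation:

	# diff exits 0 iff the staged config matches the active one.
	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "kubeadm config unchanged; skipping reconfiguration"
	fi
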
	I1102 13:37:27.456342  333276 kubeadm.go:602] duration metric: took 24.307485ms to restartPrimaryControlPlane
	I1102 13:37:27.456351  333276 kubeadm.go:403] duration metric: took 86.638872ms to StartCluster
	I1102 13:37:27.456373  333276 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.456425  333276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.458467  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.458734  333276 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:27.458787  333276 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:27.458879  333276 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458899  333276 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.458911  333276 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:27.458908  333276 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458932  333276 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-538419"
	I1102 13:37:27.458925  333276 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458942  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	W1102 13:37:27.458947  333276 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:27.458958  333276 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-538419"
	I1102 13:37:27.458977  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.459272  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459713  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:27.463479  333276 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:27.466531  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.489401  333276 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:27.489460  333276 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.490695  333276 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.490742  333276 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:27.490779  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.490905  333276 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.490993  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:27.491127  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.491342  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.492226  333276 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1102 13:37:24.634329  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:27.133336  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:27.068545  333962 kubeadm.go:884] updating cluster {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:27.068680  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:27.068745  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.101393  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.101420  333962 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:27.101479  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.128092  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.128116  333962 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:27.128126  333962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 13:37:27.128251  333962 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-066482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
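The kubelet unit above uses the standard systemd override idiom: because ExecStart= is a list, the empty assignment first clears the inherited command and the second assignment installs the minikube-specific one, which is why a 10-kubeadm.conf drop-in is scp'd over and daemon-reload runs afterwards. A minimal hand-written sketch, with the flag list trimmed for illustration:

	# Override kubelet's ExecStart via a systemd drop-in, then reload units.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml
	EOF
	sudo systemctl daemon-reload
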
	I1102 13:37:27.128346  333962 ssh_runner.go:195] Run: crio config
	I1102 13:37:27.177989  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:27.178010  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:27.178023  333962 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1102 13:37:27.178058  333962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-066482 NodeName:newest-cni-066482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:27.178237  333962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-066482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:37:27.178304  333962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:27.189125  333962 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:27.189195  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:27.198724  333962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 13:37:27.212769  333962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:27.228632  333962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1102 13:37:27.246146  333962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:27.251613  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:27.264788  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.377806  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.402967  333962 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482 for IP: 192.168.76.2
	I1102 13:37:27.402990  333962 certs.go:195] generating shared ca certs ...
	I1102 13:37:27.403009  333962 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.403159  333962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:27.403219  333962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:27.403231  333962 certs.go:257] generating profile certs ...
	I1102 13:37:27.403335  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.key
	I1102 13:37:27.403407  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b
	I1102 13:37:27.403461  333962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key
	I1102 13:37:27.403744  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:27.403786  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:27.403799  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:27.403828  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:27.403859  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:27.403889  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:27.403938  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:27.404687  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:27.430704  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:27.452417  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:27.483637  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:27.517977  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 13:37:27.573265  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:37:27.598304  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:27.618317  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:37:27.639808  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:27.657181  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:27.681070  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:27.704152  333962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:27.722253  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:27.731519  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:27.743037  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748191  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748248  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.799685  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:27.809081  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:27.818029  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822628  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822681  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.881477  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.891397  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.900808  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904551  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904621  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.942963  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:27.952008  333962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.956221  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.997863  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:28.047948  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:28.098660  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:28.159695  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:28.224833  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1102 13:37:28.294684  333962 kubeadm.go:401] StartCluster: {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:28.294796  333962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:28.294862  333962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:28.338693  333962 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:28.338718  333962 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:28.338726  333962 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:28.338732  333962 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:28.338766  333962 cri.go:89] found id: ""
	I1102 13:37:28.338853  333962 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:28.354945  333962 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:28Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:28.355009  333962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:28.369068  333962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:28.369089  333962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:28.369134  333962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:28.379230  333962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:28.380715  333962 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-066482" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.381840  333962 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-066482" cluster setting kubeconfig missing "newest-cni-066482" context setting]
	I1102 13:37:28.383187  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.385699  333962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:28.395624  333962 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1102 13:37:28.395794  333962 kubeadm.go:602] duration metric: took 26.694184ms to restartPrimaryControlPlane
	I1102 13:37:28.395818  333962 kubeadm.go:403] duration metric: took 101.142697ms to StartCluster
	I1102 13:37:28.395872  333962 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.396257  333962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.398943  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.399509  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:28.399593  333962 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:28.399697  333962 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-066482"
	I1102 13:37:28.399715  333962 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-066482"
	W1102 13:37:28.399723  333962 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:28.399747  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400242  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400322  333962 addons.go:70] Setting dashboard=true in profile "newest-cni-066482"
	I1102 13:37:28.400358  333962 addons.go:239] Setting addon dashboard=true in "newest-cni-066482"
	W1102 13:37:28.400367  333962 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:28.400398  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400424  333962 addons.go:70] Setting default-storageclass=true in profile "newest-cni-066482"
	I1102 13:37:28.400440  333962 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-066482"
	I1102 13:37:28.400747  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400930  333962 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:28.401517  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.404755  333962 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:28.405862  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:28.441415  333962 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 13:37:28.441452  333962 addons.go:239] Setting addon default-storageclass=true in "newest-cni-066482"
	W1102 13:37:28.441469  333962 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:28.441497  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.441992  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.443413  333962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:28.443587  333962 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.493290  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:27.493307  333276 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:27.493359  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.524914  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.531668  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.532019  333276 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.532031  333276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:27.532222  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.567797  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
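The docker container inspect template above pulls the published host port for the container's 22/tcp out of .NetworkSettings.Ports; the sshutil lines confirm it resolved to 127.0.0.1:33130 for this profile. The lookup by hand:

	# Print the host port Docker mapped to the container's SSH port (22/tcp).
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-538419
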
	I1102 13:37:27.652323  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.668241  333276 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:27.674864  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:27.674945  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:27.680089  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.693623  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:27.693664  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:27.697013  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.711998  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:27.712105  333276 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:27.730732  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:27.730759  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:27.750616  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:27.750640  333276 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:27.770302  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:27.770348  333276 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:27.786951  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:27.786978  333276 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:27.803298  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:27.803327  333276 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:27.818949  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:27.818969  333276 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:27.832390  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
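Addon manifests are first scp'd into /etc/kubernetes/addons on the node, then applied in one invocation of the cluster's own kubectl binary (pinned to v1.34.1 and pointed at the node-local kubeconfig); repeating -f folds all ten dashboard manifests into a single apply. The shape of that call, trimmed to two files:

	# One kubectl apply, many manifests: each -f adds another file.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  -f /etc/kubernetes/addons/dashboard-ns.yaml \
	  -f /etc/kubernetes/addons/dashboard-svc.yaml
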
	I1102 13:37:29.492024  333276 node_ready.go:49] node "default-k8s-diff-port-538419" is "Ready"
	I1102 13:37:29.492059  333276 node_ready.go:38] duration metric: took 1.82377358s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:29.492086  333276 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:29.492140  333276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:30.138979  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458843131s)
	I1102 13:37:30.139203  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.306780942s)
	I1102 13:37:30.139232  333276 api_server.go:72] duration metric: took 2.680469941s to wait for apiserver process to appear ...
	I1102 13:37:30.139245  333276 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:30.139262  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.139337  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.442032819s)
	I1102 13:37:30.140830  333276 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-538419 addons enable metrics-server
	
	I1102 13:37:30.144441  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.144472  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
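In the healthz dump above, [+] marks a passing check and [-] a failing one; the two failing poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are routine immediately after an apiserver restart and clear once bootstrap reconciliation finishes, which is why the wait loop keeps polling. The endpoint can also be probed directly; /healthz is typically readable by unauthenticated clients via the system:public-info-viewer role:

	# Poll apiserver health with per-check detail (?verbose forces the full listing).
	curl -ks https://192.168.85.2:8444/healthz?verbose
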
	I1102 13:37:30.146788  333276 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1102 13:37:28.444400  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:28.444417  333962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:28.444498  333962 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.444527  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:28.444586  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.444500  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.481261  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.483777  333962 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.483797  333962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:28.483850  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.485369  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.519190  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.625401  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:28.638037  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.653422  333962 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:28.653533  333962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:28.682341  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.694090  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:28.694153  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:28.716329  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:28.716362  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:28.737776  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:28.737802  333962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:28.755596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:28.755618  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:28.780596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:28.780618  333962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:28.797326  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:28.797355  333962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:28.814533  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:28.814561  333962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:28.832611  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:28.832643  333962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:28.856649  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:28.856713  333962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:28.874888  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:31.209184  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.571053535s)
	I1102 13:37:31.209241  333962 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.555675413s)
	I1102 13:37:31.209282  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.526844296s)
	I1102 13:37:31.209372  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.334451096s)
	I1102 13:37:31.209287  333962 api_server.go:72] duration metric: took 2.808316845s to wait for apiserver process to appear ...
	I1102 13:37:31.209432  333962 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:31.209539  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.211060  333962 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-066482 addons enable metrics-server
	
	I1102 13:37:31.216831  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.216854  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.222003  333962 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1102 13:37:28.750465  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:30.751057  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:31.223225  333962 addons.go:515] duration metric: took 2.823637855s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:31.709830  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.714383  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.714411  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:32.209645  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:32.214358  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 13:37:32.215702  333962 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:32.215723  333962 api_server.go:131] duration metric: took 1.006197716s to wait for apiserver health ...
	I1102 13:37:32.215740  333962 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:32.219326  333962 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:32.219361  333962 system_pods.go:61] "coredns-66bc5c9577-9knvp" [fc8ccf3a-6c3a-4df9-b174-358eea8022b8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219370  333962 system_pods.go:61] "etcd-newest-cni-066482" [b4f125a2-c9c3-4192-bf23-c4ad050bb815] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:32.219379  333962 system_pods.go:61] "kindnet-schdw" [74998f6e-2a7a-40d8-a5c2-a1142f69ee93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 13:37:32.219392  333962 system_pods.go:61] "kube-apiserver-newest-cni-066482" [e270489b-3057-480f-96dd-329cbcc6f0e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:32.219397  333962 system_pods.go:61] "kube-controller-manager-newest-cni-066482" [9b62b1ef-e72e-41f9-9e3d-c57bfaf0b578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:32.219403  333962 system_pods.go:61] "kube-proxy-fkp22" [85a24a6f-4f8c-4671-92f6-fbe43ab7bb10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 13:37:32.219408  333962 system_pods.go:61] "kube-scheduler-newest-cni-066482" [5f88460d-ea42-4891-a458-b86eb57b551e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:32.219417  333962 system_pods.go:61] "storage-provisioner" [3bbb95ec-ecf8-4335-b3df-82a08d03b66b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219424  333962 system_pods.go:74] duration metric: took 3.677705ms to wait for pod list to return data ...
	I1102 13:37:32.219434  333962 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:32.221997  333962 default_sa.go:45] found service account: "default"
	I1102 13:37:32.222015  333962 default_sa.go:55] duration metric: took 2.576388ms for default service account to be created ...
	I1102 13:37:32.222026  333962 kubeadm.go:587] duration metric: took 3.821064355s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:32.222059  333962 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:32.224451  333962 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:32.224479  333962 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:32.224495  333962 node_conditions.go:105] duration metric: took 2.431117ms to run NodePressure ...
	I1102 13:37:32.224508  333962 start.go:242] waiting for startup goroutines ...
	I1102 13:37:32.224519  333962 start.go:247] waiting for cluster config update ...
	I1102 13:37:32.224531  333962 start.go:256] writing updated cluster config ...
	I1102 13:37:32.224891  333962 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:32.277880  333962 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:32.280437  333962 out.go:179] * Done! kubectl is now configured to use "newest-cni-066482" cluster and "default" namespace by default
	W1102 13:37:29.133694  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:31.633878  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:32.248764  321355 pod_ready.go:94] pod "coredns-66bc5c9577-2dtpc" is "Ready"
	I1102 13:37:32.248791  321355 pod_ready.go:86] duration metric: took 36.005777547s for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.251505  321355 pod_ready.go:83] waiting for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.256003  321355 pod_ready.go:94] pod "etcd-no-preload-978795" is "Ready"
	I1102 13:37:32.256030  321355 pod_ready.go:86] duration metric: took 4.500033ms for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.258154  321355 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.262361  321355 pod_ready.go:94] pod "kube-apiserver-no-preload-978795" is "Ready"
	I1102 13:37:32.262386  321355 pod_ready.go:86] duration metric: took 4.208933ms for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.264670  321355 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.446929  321355 pod_ready.go:94] pod "kube-controller-manager-no-preload-978795" is "Ready"
	I1102 13:37:32.446958  321355 pod_ready.go:86] duration metric: took 182.263594ms for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.647228  321355 pod_ready.go:83] waiting for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.046223  321355 pod_ready.go:94] pod "kube-proxy-rmkmd" is "Ready"
	I1102 13:37:33.046245  321355 pod_ready.go:86] duration metric: took 398.98563ms for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.247357  321355 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646686  321355 pod_ready.go:94] pod "kube-scheduler-no-preload-978795" is "Ready"
	I1102 13:37:33.646712  321355 pod_ready.go:86] duration metric: took 399.328602ms for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646724  321355 pod_ready.go:40] duration metric: took 37.476249238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:33.693279  321355 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:33.695127  321355 out.go:179] * Done! kubectl is now configured to use "no-preload-978795" cluster and "default" namespace by default
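
The pod_ready.go lines above implement a plain poll loop: fetch each control-plane pod, check its Ready condition, and stop once it is True or the pod is gone, recording a duration metric per pod. A minimal client-go sketch of that loop follows; the helper name, poll interval, and timeout are assumptions for illustration, not minikube's actual code.

	// podready_sketch.go: hypothetical version of the wait loop logged by
	// pod_ready.go above; not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the pod reports Ready=True, disappears, or the
	// timeout expires, mirroring the log's `to be "Ready" or be gone`.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return nil // "or be gone"
			}
			if err != nil {
				return err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		return fmt.Errorf("pod %s/%s never became Ready", ns, name)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system",
			"coredns-66bc5c9577-2dtpc", 4*time.Minute); err != nil {
			panic(err)
		}
	}

Run against a kubeconfig pointing at the profile's cluster; the 36s the log records for coredns-66bc5c9577-2dtpc corresponds to this loop iterating until the kubelet marked the pod Ready.
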
	I1102 13:37:30.148737  333276 addons.go:515] duration metric: took 2.689945409s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:30.639704  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.646596  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.646625  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.140024  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:31.144505  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1102 13:37:31.145652  333276 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:31.145677  333276 api_server.go:131] duration metric: took 1.006426268s to wait for apiserver health ...
	I1102 13:37:31.145686  333276 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:31.148654  333276 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:31.148693  333276 system_pods.go:61] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.148706  333276 system_pods.go:61] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.148715  333276 system_pods.go:61] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.148725  333276 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.148735  333276 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.148740  333276 system_pods.go:61] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.148749  333276 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.148752  333276 system_pods.go:61] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.148758  333276 system_pods.go:74] duration metric: took 3.0672ms to wait for pod list to return data ...
	I1102 13:37:31.148767  333276 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:31.151024  333276 default_sa.go:45] found service account: "default"
	I1102 13:37:31.151047  333276 default_sa.go:55] duration metric: took 2.27431ms for default service account to be created ...
	I1102 13:37:31.151056  333276 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:37:31.153886  333276 system_pods.go:86] 8 kube-system pods found
	I1102 13:37:31.153909  333276 system_pods.go:89] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.153917  333276 system_pods.go:89] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.153923  333276 system_pods.go:89] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.153933  333276 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.153941  333276 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.153948  333276 system_pods.go:89] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.153953  333276 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.153958  333276 system_pods.go:89] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.153965  333276 system_pods.go:126] duration metric: took 2.903516ms to wait for k8s-apps to be running ...
	I1102 13:37:31.153973  333276 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:37:31.154011  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:31.167191  333276 system_svc.go:56] duration metric: took 13.212049ms WaitForService to wait for kubelet
	I1102 13:37:31.167214  333276 kubeadm.go:587] duration metric: took 3.70845301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:37:31.167229  333276 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:31.170065  333276 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:31.170091  333276 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:31.170118  333276 node_conditions.go:105] duration metric: took 2.883566ms to run NodePressure ...
	I1102 13:37:31.170133  333276 start.go:242] waiting for startup goroutines ...
	I1102 13:37:31.170146  333276 start.go:247] waiting for cluster config update ...
	I1102 13:37:31.170163  333276 start.go:256] writing updated cluster config ...
	I1102 13:37:31.170468  333276 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:31.174099  333276 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:31.178339  333276 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 13:37:33.184101  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:34.134125  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:36.633840  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:35.685411  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:38.184423  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:39.134511  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:41.633152  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:40.683713  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:43.183801  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
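
The alternating 500/200 healthz responses above are the expected pattern right after an apiserver restart: /healthz returns 500 while any post-start hook is still [-] (here rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes settle last), and the client simply retries until it sees 200 "ok". A minimal sketch of such a poll loop, assuming a self-signed apiserver certificate (hence InsecureSkipVerify; the real check presumably goes through the profile's kubeconfig credentials) and an arbitrary retry interval:

	// healthz_poll_sketch.go: hypothetical poller mirroring the api_server.go
	// loop in the log above; not minikube's code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert is signed by the cluster CA, which is
				// not in the system trust store.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "healthz returned 200: ok"
				}
				// On 500 the body lists every [+]/[-] post-start hook, as above.
				fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // assumed interval
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			panic(err)
		}
	}
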
	
	
	==> CRI-O <==
	Nov 02 13:37:17 no-preload-978795 crio[592]: time="2025-11-02T13:37:17.616909128Z" level=info msg="Started container" PID=1767 containerID=0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper id=d580727c-0cba-44e5-a55a-152461ff9924 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e4117bc7f194c7a6c39c8a6f3a84c95e29ad34fccd9aab4fe0a18f15c59f0fd
	Nov 02 13:37:17 no-preload-978795 crio[592]: time="2025-11-02T13:37:17.683616881Z" level=info msg="Removing container: c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3" id=1d0e8d8f-a952-41c1-a05d-b17b188fb1b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:17 no-preload-978795 crio[592]: time="2025-11-02T13:37:17.694326321Z" level=info msg="Removed container c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper" id=1d0e8d8f-a952-41c1-a05d-b17b188fb1b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.706176443Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2495f365-1ec0-43a3-8dce-82fa820efe02 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.707192341Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=61cd609a-816a-4eea-a6d3-d8106a508e85 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.708324907Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8a788ff6-da32-4149-a9f0-528d0a8db867 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.708471921Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.713481145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.713704327Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fe0aaf16441a81ca82b47034e20f32e28d34d31f5a767b89cafcadbfd70fe0dd/merged/etc/passwd: no such file or directory"
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.713744314Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fe0aaf16441a81ca82b47034e20f32e28d34d31f5a767b89cafcadbfd70fe0dd/merged/etc/group: no such file or directory"
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.714069873Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.738162128Z" level=info msg="Created container a73f6a76ff53b733881343080969a41157d4e38c47e3c02ac95e3ecec4cb6872: kube-system/storage-provisioner/storage-provisioner" id=8a788ff6-da32-4149-a9f0-528d0a8db867 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.738879963Z" level=info msg="Starting container: a73f6a76ff53b733881343080969a41157d4e38c47e3c02ac95e3ecec4cb6872" id=220aff17-079e-4f0f-9795-d87416ffbba5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.741171853Z" level=info msg="Started container" PID=1781 containerID=a73f6a76ff53b733881343080969a41157d4e38c47e3c02ac95e3ecec4cb6872 description=kube-system/storage-provisioner/storage-provisioner id=220aff17-079e-4f0f-9795-d87416ffbba5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b804fba945e2d5927d3e49af96ed05ed9c53af5c472d79e813b7574db956664f
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.574360529Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=986ec599-aaa1-4fbf-bc8e-05a242c1b330 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.575256926Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f0379bd3-2c10-4674-b43e-bbf26d33ab4f name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.576248767Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper" id=63418450-4f26-4a9b-8ae0-154706e4a4b0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.576398837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.582248303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.582724468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.606511629Z" level=info msg="Created container 70dfe3162f618f62b2c5d06ccdd44e000f798419a2105ab94355cb811cef8cd6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper" id=63418450-4f26-4a9b-8ae0-154706e4a4b0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.607210026Z" level=info msg="Starting container: 70dfe3162f618f62b2c5d06ccdd44e000f798419a2105ab94355cb811cef8cd6" id=be31a4f3-3155-44a6-8cd2-d751a29eba6a name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.6089641Z" level=info msg="Started container" PID=1842 containerID=70dfe3162f618f62b2c5d06ccdd44e000f798419a2105ab94355cb811cef8cd6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper id=be31a4f3-3155-44a6-8cd2-d751a29eba6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e4117bc7f194c7a6c39c8a6f3a84c95e29ad34fccd9aab4fe0a18f15c59f0fd
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.763851404Z" level=info msg="Removing container: 0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb" id=7cb462a0-e31b-4995-aa19-1624b9bb1c4f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.856839685Z" level=info msg="Removed container 0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper" id=7cb462a0-e31b-4995-aa19-1624b9bb1c4f name=/runtime.v1.RuntimeService/RemoveContainer
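
Each crio[592] line above is CRI-O servicing a /runtime.v1.RuntimeService or /runtime.v1.ImageService RPC from the kubelet; the dashboard-metrics-scraper sequence (create, start, exit, remove, recreate at a higher attempt) is a container in a restart backoff loop. The same RuntimeService can be queried directly over CRI-O's socket, which is what crictl ps does; a sketch follows, with the socket path and gRPC setup as assumptions:

	// cri_list_sketch.go: hypothetical CRI client listing containers via the
	// same /runtime.v1.RuntimeService the kubelet uses (needs root for the socket).
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// CRI-O's default socket path (an assumption; check crio.conf).
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		// Mirrors the CONTAINER/NAME/ATTEMPT/STATE columns in the table below.
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-28s attempt=%d  %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}
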
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	70dfe3162f618       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 seconds ago       Exited              dashboard-metrics-scraper   3                   5e4117bc7f194       dashboard-metrics-scraper-6ffb444bf9-g58tx   kubernetes-dashboard
	a73f6a76ff53b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   b804fba945e2d       storage-provisioner                          kube-system
	50fcd43f519f5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   cd249615a072e       kubernetes-dashboard-855c9754f9-hnwjb        kubernetes-dashboard
	9b098d972b6bc       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   1f5baf157ff7a       busybox                                      default
	41eca953e7391       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   b804fba945e2d       storage-provisioner                          kube-system
	00e1b7154486f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   0a39ce8c11aaa       coredns-66bc5c9577-2dtpc                     kube-system
	8279fbff65bbb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   b89ec3cebc4a6       kube-proxy-rmkmd                             kube-system
	a53e288237c42       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   6efec60d92bdf       kindnet-d8n4x                                kube-system
	d75465a215601       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   7c240bd52ae45       etcd-no-preload-978795                       kube-system
	34d6d45b3c166       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   6432b67c75c64       kube-apiserver-no-preload-978795             kube-system
	42ae707ea436f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   7415536fb5b42       kube-controller-manager-no-preload-978795    kube-system
	05fdfa04c3355       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   ca88872b31f78       kube-scheduler-no-preload-978795             kube-system
	
	
	==> coredns [00e1b7154486fe063132557a594935f8ddd4344e0b78ebd768f66fc54e72cefb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59139 - 21047 "HINFO IN 1169371270981510889.7365533718010229387. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042025875s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
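
The reflector failures above show coredns timing out while dialing 10.96.0.1:443, the ClusterIP of the kubernetes Service; those dials only start succeeding once the node's dataplane (kube-proxy's Service rules) is reprogrammed after the restart, which is also why the ready plugin keeps reporting "Still waiting on: kubernetes". A quick in-pod probe of that VIP, as a sketch (the /version endpoint is readable without credentials on a default-configured apiserver; InsecureSkipVerify because this probe does not pin the cluster CA):

	// svc_probe_sketch.go: hypothetical probe of the "kubernetes" Service VIP
	// that the coredns reflectors above time out on; run it inside a pod.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second, // fail fast, like the reflector dials
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// 10.96.0.1:443 only routes once the Service rules are in place;
		// before that, dials hang until they hit the i/o timeout seen above.
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("dial failed (expected until the dataplane is ready):", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver reachable via Service VIP:", resp.Status)
	}
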
	
	
	==> describe nodes <==
	Name:               no-preload-978795
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-978795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=no-preload-978795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_35_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:35:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-978795
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:37:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:37:25 +0000   Sun, 02 Nov 2025 13:35:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:37:25 +0000   Sun, 02 Nov 2025 13:35:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:37:25 +0000   Sun, 02 Nov 2025 13:35:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:37:25 +0000   Sun, 02 Nov 2025 13:36:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-978795
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                886d43f2-0cc9-4abe-b8a0-71a0f502a9fe
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-2dtpc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-978795                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-d8n4x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-978795              250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-978795     200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-rmkmd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-978795              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g58tx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hnwjb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node no-preload-978795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node no-preload-978795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node no-preload-978795 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           110s               node-controller  Node no-preload-978795 event: Registered Node no-preload-978795 in Controller
	  Normal  NodeReady                96s                kubelet          Node no-preload-978795 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node no-preload-978795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node no-preload-978795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node no-preload-978795 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node no-preload-978795 event: Registered Node no-preload-978795 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [d75465a215601ad6902284a8f4ac503bad1e462f3234ddee3675f0f0f025f32b] <==
	{"level":"warn","ts":"2025-11-02T13:36:53.826419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.834233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.842422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.848832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.854491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.860545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.867181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.874503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.880416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.886811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.897678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.904468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.910864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.917958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.926978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.934980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.942198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.949046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.962363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.968434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.974711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.980676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.996791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:54.002543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:54.008970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47228","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:37:48 up  1:20,  0 user,  load average: 3.59, 3.96, 2.68
	Linux no-preload-978795 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a53e288237c421817e13c84735208e1931104dd178dda81c3e30acbe2d0a7400] <==
	I1102 13:36:55.186672       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:36:55.186933       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1102 13:36:55.187115       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:36:55.187133       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:36:55.187160       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:36:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:36:55.386853       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:36:55.386891       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:36:55.386907       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:36:55.387459       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:36:55.787467       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:36:55.787494       1 metrics.go:72] Registering metrics
	I1102 13:36:55.787553       1 controller.go:711] "Syncing nftables rules"
	I1102 13:37:05.387645       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:37:05.387701       1 main.go:301] handling current node
	I1102 13:37:15.387786       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:37:15.387829       1 main.go:301] handling current node
	I1102 13:37:25.387865       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:37:25.387935       1 main.go:301] handling current node
	I1102 13:37:35.389241       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:37:35.389288       1 main.go:301] handling current node
	I1102 13:37:45.395838       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:37:45.395875       1 main.go:301] handling current node
	
	
	==> kube-apiserver [34d6d45b3c166e3ece7cae55497eada59f4b0a2911a5e1fda5cfa3e653f11a69] <==
	I1102 13:36:54.556162       1 cache.go:39] Caches are synced for autoregister controller
	I1102 13:36:54.556352       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 13:36:54.556530       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 13:36:54.556598       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 13:36:54.557403       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 13:36:54.557557       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1102 13:36:54.557643       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 13:36:54.557803       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 13:36:54.558895       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 13:36:54.558820       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:36:54.567870       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1102 13:36:54.569901       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 13:36:54.576441       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:36:54.609030       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:36:54.615608       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:36:54.906026       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 13:36:54.967214       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:36:54.990066       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:36:55.000152       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:36:55.043831       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.127.105"}
	I1102 13:36:55.057094       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.117.24"}
	I1102 13:36:55.459537       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:36:58.260016       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:36:58.312773       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 13:36:58.410275       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [42ae707ea436fef32dc405c69f4b8a2094bf96e5cb62e2fb5a4f97d5c5f87181] <==
	I1102 13:36:57.906980       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1102 13:36:57.906993       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 13:36:57.907027       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 13:36:57.907017       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 13:36:57.907048       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 13:36:57.907080       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 13:36:57.907081       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 13:36:57.907138       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:36:57.907138       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 13:36:57.907146       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 13:36:57.907146       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:36:57.907155       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:36:57.907205       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 13:36:57.907520       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 13:36:57.907082       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 13:36:57.907830       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1102 13:36:57.907840       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 13:36:57.908507       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 13:36:57.908528       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 13:36:57.911252       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 13:36:57.911292       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:36:57.912797       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 13:36:57.915096       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 13:36:57.917528       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:36:57.931736       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8279fbff65bbb7eeddd8cf2d2a8220d1e7e1e278aaf553e70450472f1a32cd21] <==
	I1102 13:36:54.990999       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:36:55.051753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:36:55.152397       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:36:55.152441       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1102 13:36:55.152587       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:36:55.170540       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:36:55.170618       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:36:55.175640       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:36:55.176031       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:36:55.176068       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:36:55.177337       1 config.go:200] "Starting service config controller"
	I1102 13:36:55.177360       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:36:55.177360       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:36:55.177381       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:36:55.177408       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:36:55.177413       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:36:55.177428       1 config.go:309] "Starting node config controller"
	I1102 13:36:55.177445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:36:55.277837       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:36:55.277873       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 13:36:55.277847       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:36:55.277893       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [05fdfa04c33553bae9ce98eabd014d2c5fe0f3155fff9f5518fc306c67872c48] <==
	I1102 13:36:52.529111       1 serving.go:386] Generated self-signed cert in-memory
	W1102 13:36:54.476422       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 13:36:54.476471       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 13:36:54.476484       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 13:36:54.476493       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 13:36:54.530479       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:36:54.530605       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:36:54.532743       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:36:54.532844       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:36:54.533258       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:36:54.533614       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 13:36:54.633886       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:37:02 no-preload-978795 kubelet[739]: I1102 13:37:02.634418     739 scope.go:117] "RemoveContainer" containerID="af3bef5723435b36c2b9bee47bb13046291906aa82131203969eac05b1605d87"
	Nov 02 13:37:02 no-preload-978795 kubelet[739]: I1102 13:37:02.634778     739 scope.go:117] "RemoveContainer" containerID="c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3"
	Nov 02 13:37:02 no-preload-978795 kubelet[739]: E1102 13:37:02.634953     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:03 no-preload-978795 kubelet[739]: I1102 13:37:03.637875     739 scope.go:117] "RemoveContainer" containerID="c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3"
	Nov 02 13:37:03 no-preload-978795 kubelet[739]: E1102 13:37:03.638016     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:04 no-preload-978795 kubelet[739]: I1102 13:37:04.642255     739 scope.go:117] "RemoveContainer" containerID="c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3"
	Nov 02 13:37:04 no-preload-978795 kubelet[739]: E1102 13:37:04.642435     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:04 no-preload-978795 kubelet[739]: I1102 13:37:04.653441     739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hnwjb" podStartSLOduration=1.084994272 podStartE2EDuration="6.653421222s" podCreationTimestamp="2025-11-02 13:36:58 +0000 UTC" firstStartedPulling="2025-11-02 13:36:58.869636265 +0000 UTC m=+7.419947577" lastFinishedPulling="2025-11-02 13:37:04.438063214 +0000 UTC m=+12.988374527" observedRunningTime="2025-11-02 13:37:04.653164092 +0000 UTC m=+13.203475414" watchObservedRunningTime="2025-11-02 13:37:04.653421222 +0000 UTC m=+13.203732543"
	Nov 02 13:37:17 no-preload-978795 kubelet[739]: I1102 13:37:17.573662     739 scope.go:117] "RemoveContainer" containerID="c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3"
	Nov 02 13:37:17 no-preload-978795 kubelet[739]: I1102 13:37:17.682112     739 scope.go:117] "RemoveContainer" containerID="c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3"
	Nov 02 13:37:17 no-preload-978795 kubelet[739]: I1102 13:37:17.682337     739 scope.go:117] "RemoveContainer" containerID="0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb"
	Nov 02 13:37:17 no-preload-978795 kubelet[739]: E1102 13:37:17.682536     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:22 no-preload-978795 kubelet[739]: I1102 13:37:22.669403     739 scope.go:117] "RemoveContainer" containerID="0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb"
	Nov 02 13:37:22 no-preload-978795 kubelet[739]: E1102 13:37:22.669603     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:25 no-preload-978795 kubelet[739]: I1102 13:37:25.705700     739 scope.go:117] "RemoveContainer" containerID="41eca953e73915c5487a90eacb76fd485c68d0c1a2f13cf2a8df0205bdc80ac9"
	Nov 02 13:37:34 no-preload-978795 kubelet[739]: I1102 13:37:34.572979     739 scope.go:117] "RemoveContainer" containerID="0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb"
	Nov 02 13:37:34 no-preload-978795 kubelet[739]: E1102 13:37:34.573217     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:45 no-preload-978795 kubelet[739]: I1102 13:37:45.573894     739 scope.go:117] "RemoveContainer" containerID="0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb"
	Nov 02 13:37:45 no-preload-978795 kubelet[739]: I1102 13:37:45.762509     739 scope.go:117] "RemoveContainer" containerID="0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb"
	Nov 02 13:37:45 no-preload-978795 kubelet[739]: I1102 13:37:45.762863     739 scope.go:117] "RemoveContainer" containerID="70dfe3162f618f62b2c5d06ccdd44e000f798419a2105ab94355cb811cef8cd6"
	Nov 02 13:37:45 no-preload-978795 kubelet[739]: E1102 13:37:45.763059     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:45 no-preload-978795 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:37:45 no-preload-978795 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:37:45 no-preload-978795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 02 13:37:45 no-preload-978795 systemd[1]: kubelet.service: Consumed 1.729s CPU time.
	
	
	==> kubernetes-dashboard [50fcd43f519f563b94c125de25dd0ac0b2df5a5a28d43437ec792a2453868dbd] <==
	2025/11/02 13:37:04 Using namespace: kubernetes-dashboard
	2025/11/02 13:37:04 Using in-cluster config to connect to apiserver
	2025/11/02 13:37:04 Using secret token for csrf signing
	2025/11/02 13:37:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 13:37:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 13:37:04 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 13:37:04 Generating JWE encryption key
	2025/11/02 13:37:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 13:37:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 13:37:04 Initializing JWE encryption key from synchronized object
	2025/11/02 13:37:04 Creating in-cluster Sidecar client
	2025/11/02 13:37:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 13:37:04 Serving insecurely on HTTP port: 9090
	2025/11/02 13:37:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 13:37:04 Starting overwatch
	
	
	==> storage-provisioner [41eca953e73915c5487a90eacb76fd485c68d0c1a2f13cf2a8df0205bdc80ac9] <==
	I1102 13:36:54.950011       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 13:37:24.954952       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a73f6a76ff53b733881343080969a41157d4e38c47e3c02ac95e3ecec4cb6872] <==
	I1102 13:37:25.754478       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:37:25.765902       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:37:25.765976       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 13:37:25.768423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:29.224727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:33.484737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:37.084347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:40.138848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:43.161610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:43.165930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:37:43.166085       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:37:43.166226       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f711ec74-9f5e-4d88-b29d-598bc126b1de", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-978795_26a11bab-726b-4109-ba35-f3830f0f29fe became leader
	I1102 13:37:43.166300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-978795_26a11bab-726b-4109-ba35-f3830f0f29fe!
	W1102 13:37:43.168122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:43.171469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:37:43.267369       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-978795_26a11bab-726b-4109-ba35-f3830f0f29fe!
	W1102 13:37:45.174233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:45.178291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:47.181722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:47.185893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-978795 -n no-preload-978795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-978795 -n no-preload-978795: exit status 2 (329.632354ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-978795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-978795
helpers_test.go:243: (dbg) docker inspect no-preload-978795:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e",
	        "Created": "2025-11-02T13:35:24.534535218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321737,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:36:42.473270325Z",
	            "FinishedAt": "2025-11-02T13:36:41.51136344Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/hosts",
	        "LogPath": "/var/lib/docker/containers/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e/f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e-json.log",
	        "Name": "/no-preload-978795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-978795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-978795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2b4d88c9fa8197fef580b5fd65b1cd1fb8f94d40b629965e71b3f3b2cb2490e",
	                "LowerDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58d103c5728d5b1dccf079047f64a5a74eb9d503e4de657d95f7c931a913230a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-978795",
	                "Source": "/var/lib/docker/volumes/no-preload-978795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-978795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-978795",
	                "name.minikube.sigs.k8s.io": "no-preload-978795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "69c1c9465fae5b6b35479ca6bf37ee803235e9e4ce6518ee37b6698aa0a87d63",
	            "SandboxKey": "/var/run/docker/netns/69c1c9465fae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-978795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:0d:1e:e1:a3:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "11ed3231c38232a3af5735052e72b0c429b6b7e978e401e7b612ef36fc53303a",
	                    "EndpointID": "e75a164c6ccbddc1839435a511dfd22da363e2808ea38463599b80251666580a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-978795",
	                        "f2b4d88c9fa8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-978795 -n no-preload-978795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-978795 -n no-preload-978795: exit status 2 (340.433416ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-978795 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-978795 logs -n 25: (1.119020279s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-054159 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ pause   │ -p old-k8s-version-054159 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-978795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-748183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ start   │ -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ stop    │ -p embed-certs-748183 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538419 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p embed-certs-748183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ stop    │ -p newest-cni-066482 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-066482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ newest-cni-066482 image list --format=json                                                                                                                                                                                                    │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p newest-cni-066482 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ no-preload-978795 image list --format=json                                                                                                                                                                                                    │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p no-preload-978795 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:37:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:37:20.524373  333962 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:37:20.524647  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524664  333962 out.go:374] Setting ErrFile to fd 2...
	I1102 13:37:20.524670  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524846  333962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:37:20.525403  333962 out.go:368] Setting JSON to false
	I1102 13:37:20.526966  333962 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4793,"bootTime":1762085848,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:37:20.527085  333962 start.go:143] virtualization: kvm guest
	I1102 13:37:20.531180  333962 out.go:179] * [newest-cni-066482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:37:20.533535  333962 notify.go:221] Checking for updates...
	I1102 13:37:20.533705  333962 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:37:20.535165  333962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:37:20.536733  333962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:20.538369  333962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:37:20.539773  333962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:37:20.541014  333962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:37:20.543949  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:20.544901  333962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:37:20.580929  333962 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:37:20.581269  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.677940  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.664880977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
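The `docker system info --format "{{json .}}"` dump above is how minikube learns the host's CPU count, memory, and cgroup driver before validating the driver. A minimal, hypothetical Go sketch of the same probe (field names taken from the JSON keys visible in the dump; this is not minikube's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo holds just the fields this sketch cares about; the
    // real `docker system info` JSON carries many more.
    type dockerInfo struct {
        NCPU          int    `json:"NCPU"`
        MemTotal      int64  `json:"MemTotal"`
        CgroupDriver  string `json:"CgroupDriver"`
        ServerVersion string `json:"ServerVersion"`
    }

    func main() {
        // Same invocation as the cli_runner lines in the log.
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("%d CPUs, %d bytes RAM, cgroup driver %q, server %s\n",
            info.NCPU, info.MemTotal, info.CgroupDriver, info.ServerVersion)
    }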
	I1102 13:37:20.678092  333962 docker.go:319] overlay module found
	I1102 13:37:20.686090  333962 out.go:179] * Using the docker driver based on existing profile
	I1102 13:37:20.689767  333962 start.go:309] selected driver: docker
	I1102 13:37:20.689788  333962 start.go:930] validating driver "docker" against &{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.689907  333962 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:37:20.690830  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.765132  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.75342287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.765679  333962 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:20.765731  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:20.765799  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:20.765881  333962 start.go:353] cluster config:
	{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.825212  333962 out.go:179] * Starting "newest-cni-066482" primary control-plane node in "newest-cni-066482" cluster
	I1102 13:37:20.829240  333962 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:37:20.869092  333962 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:37:20.895924  333962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:37:20.895925  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:20.896230  333962 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:37:20.896249  333962 cache.go:59] Caching tarball of preloaded images
	I1102 13:37:20.896370  333962 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:37:20.896389  333962 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:37:20.896531  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:20.923310  333962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:37:20.923336  333962 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:37:20.923354  333962 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:37:20.923397  333962 start.go:360] acquireMachinesLock for newest-cni-066482: {Name:mk25ceca9700045fc79c727ac5793f50b1f35f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:37:20.923467  333962 start.go:364] duration metric: took 45.165µs to acquireMachinesLock for "newest-cni-066482"
	I1102 13:37:20.923495  333962 start.go:96] Skipping create...Using existing machine configuration
	I1102 13:37:20.923507  333962 fix.go:54] fixHost starting: 
	I1102 13:37:20.923821  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:20.947956  333962 fix.go:112] recreateIfNeeded on newest-cni-066482: state=Stopped err=<nil>
	W1102 13:37:20.947991  333962 fix.go:138] unexpected machine state, will restart: <nil>
	W1102 13:37:17.749910  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:19.754111  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:18.133437  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:20.135974  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:22.633523  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:19.800458  333276 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-538419" ...
	I1102 13:37:19.800582  333276 cli_runner.go:164] Run: docker start default-k8s-diff-port-538419
	I1102 13:37:20.258040  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:20.285518  333276 kic.go:430] container "default-k8s-diff-port-538419" state is running.
	I1102 13:37:20.285975  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:20.314790  333276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:37:20.315668  333276 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:20.316243  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:20.344162  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:20.344635  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:20.344656  333276 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:20.345938  333276 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42554->127.0.0.1:33130: read: connection reset by peer
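The "connection reset by peer" here is expected: `docker start` returns before sshd inside the container is listening, so the provisioner keeps redialing until the handshake succeeds (about three seconds later, per the next line). A hedged sketch of such a retry loop using golang.org/x/crypto/ssh; this is illustrative, not minikube's implementation, and the key path is a placeholder:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry redials until sshd in the freshly started container
    // accepts the handshake or the deadline passes.
    func dialWithRetry(addr, keyPath string, deadline time.Duration) (*ssh.Client, error) {
        pemBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container, as in the log
            Timeout:         5 * time.Second,
        }
        var lastErr error
        for start := time.Now(); time.Since(start) < deadline; time.Sleep(500 * time.Millisecond) {
            c, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return c, nil
            }
            lastErr = err // e.g. "read: connection reset by peer" while sshd starts
        }
        return nil, fmt.Errorf("ssh not ready: %w", lastErr)
    }

    func main() {
        // Port 33130 is the host port mapped to 22/tcp in the log;
        // the key path is hypothetical.
        client, err := dialWithRetry("127.0.0.1:33130", "/path/to/id_rsa", time.Minute)
        if err != nil {
            panic(err)
        }
        defer client.Close()
    }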
	I1102 13:37:23.485888  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.485911  333276 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-538419"
	I1102 13:37:23.485968  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.504539  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.504787  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.504808  333276 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-538419 && echo "default-k8s-diff-port-538419" | sudo tee /etc/hostname
	I1102 13:37:23.654299  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.654392  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.673075  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.673329  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.673355  333276 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-538419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-538419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-538419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:23.814290  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:23.814321  333276 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:23.814341  333276 ubuntu.go:190] setting up certificates
	I1102 13:37:23.814351  333276 provision.go:84] configureAuth start
	I1102 13:37:23.814396  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:23.831955  333276 provision.go:143] copyHostCerts
	I1102 13:37:23.832026  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:23.832046  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:23.832132  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:23.832261  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:23.832273  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:23.832318  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:23.832420  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:23.832433  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:23.832471  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:23.832546  333276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-538419 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-538419 localhost minikube]
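The server certificate generated here takes its SAN list straight from the line above: loopback, the container IP, the machine name, localhost, and minikube. A sketch of that issuance with Go's standard library (assumptions: RSA keys, a throwaway CA standing in for .minikube/certs/ca.pem, and the 26280h lifetime from CertExpiration in the cluster config; minikube's real helper may differ):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for .minikube/certs/ca.pem (errors
        // elided for brevity in this sketch).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the org and SAN set from the provision.go line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-538419"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"default-k8s-diff-port-538419", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }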
	I1102 13:37:24.219472  333276 provision.go:177] copyRemoteCerts
	I1102 13:37:24.219536  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.219587  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.237848  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.340891  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 13:37:24.358910  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:24.376167  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:24.393830  333276 provision.go:87] duration metric: took 579.46643ms to configureAuth
	I1102 13:37:24.393865  333276 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:24.394064  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:24.394157  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.412877  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.413122  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:24.413143  333276 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:20.978818  333962 out.go:252] * Restarting existing docker container for "newest-cni-066482" ...
	I1102 13:37:20.978914  333962 cli_runner.go:164] Run: docker start newest-cni-066482
	I1102 13:37:21.270167  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:21.288682  333962 kic.go:430] container "newest-cni-066482" state is running.
	I1102 13:37:21.289009  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:21.309331  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:21.309611  333962 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:21.309709  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:21.330053  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:21.330413  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:21.330432  333962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:21.331174  333962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55362->127.0.0.1:33135: read: connection reset by peer
	I1102 13:37:24.473386  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.473415  333962 ubuntu.go:182] provisioning hostname "newest-cni-066482"
	I1102 13:37:24.473479  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.491931  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.492137  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.492150  333962 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-066482 && echo "newest-cni-066482" | sudo tee /etc/hostname
	I1102 13:37:24.643677  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.643803  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.663238  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.663468  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.663495  333962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-066482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-066482/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-066482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:24.810077  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:24.810117  333962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:24.810141  333962 ubuntu.go:190] setting up certificates
	I1102 13:37:24.810156  333962 provision.go:84] configureAuth start
	I1102 13:37:24.810212  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:24.827792  333962 provision.go:143] copyHostCerts
	I1102 13:37:24.827858  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:24.827875  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:24.827953  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:24.828150  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:24.828164  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:24.828215  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:24.828305  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:24.828317  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:24.828355  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:24.828426  333962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.newest-cni-066482 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-066482]
	I1102 13:37:24.927237  333962 provision.go:177] copyRemoteCerts
	I1102 13:37:24.927289  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.927321  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.944584  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.045425  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:25.062863  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:25.080629  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:37:25.097296  333962 provision.go:87] duration metric: took 287.125327ms to configureAuth
	I1102 13:37:25.097332  333962 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:25.097535  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:25.097668  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.115731  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:25.115937  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:25.115955  333962 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:25.401017  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:25.401045  333962 machine.go:97] duration metric: took 4.091415666s to provisionDockerMachine
	I1102 13:37:25.401058  333962 start.go:293] postStartSetup for "newest-cni-066482" (driver="docker")
	I1102 13:37:25.401071  333962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:25.401154  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:25.401203  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.420252  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.519659  333962 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:25.522994  333962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:25.523015  333962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:25.523025  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:25.523068  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:25.523146  333962 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:25.523246  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.712619  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:24.712652  333276 machine.go:97] duration metric: took 4.396840284s to provisionDockerMachine
	I1102 13:37:24.712667  333276 start.go:293] postStartSetup for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:37:24.712682  333276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:24.712766  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:24.712819  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.733777  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.836037  333276 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:24.839702  333276 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:24.839733  333276 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:24.839744  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:24.839789  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:24.839894  333276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:24.840014  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.847534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:24.864718  333276 start.go:296] duration metric: took 152.035287ms for postStartSetup
	I1102 13:37:24.864791  333276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:24.864826  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.884885  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.983028  333276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:24.987641  333276 fix.go:56] duration metric: took 5.212515962s for fixHost
	I1102 13:37:24.987669  333276 start.go:83] releasing machines lock for "default-k8s-diff-port-538419", held for 5.212566618s
	I1102 13:37:24.987736  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:25.007034  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.007083  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.007090  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.007125  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.007153  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.007176  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.007213  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.007274  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.007319  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:25.024428  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:25.135885  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.153535  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.171518  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.177840  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.186217  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190875  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190931  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.225348  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.233857  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.242147  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245844  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245889  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.282977  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:25.290988  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.299515  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303360  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303415  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.338843  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
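The b5213941.0, 51391683.0, and 3ec20f2e.0 names above are OpenSSL subject-hash links: `openssl x509 -hash -noout` prints the hash of the certificate's subject, and the symlink under /etc/ssl/certs is what lets OpenSSL-based clients look a CA up by subject. A small Go sketch of the same two steps (paths taken from the log; this mirrors the shell commands, not minikube's internals):

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const cert = "/usr/share/ca-certificates/minikubeCA.pem"
        // Step 1: ask openssl for the subject hash (prints e.g. "b5213941").
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        // Step 2: the `ln -fs` from the log; drop any stale link first.
        os.Remove(link)
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }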
	I1102 13:37:25.348256  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:25.352326  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:37:25.357122  333276 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:25.357227  333276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:25.361283  333276 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:25.422770  333276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:25.458920  333276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:25.463750  333276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:25.463815  333276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:25.471852  333276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:25.471874  333276 start.go:496] detecting cgroup driver to use...
	I1102 13:37:25.471904  333276 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:25.471948  333276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:25.485878  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:25.497990  333276 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:25.498045  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:25.512402  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:25.525187  333276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:25.608539  333276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:25.688830  333276 docker.go:234] disabling docker service ...
	I1102 13:37:25.688921  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:25.705783  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:25.723506  333276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:25.813168  333276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:25.898289  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:25.910519  333276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:25.924524  333276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:25.924604  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.933372  333276 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:25.933426  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.942218  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.951107  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.959830  333276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:25.967946  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.977032  333276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.986463  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.995429  333276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.003006  333276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.010445  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.094219  333276 ssh_runner.go:195] Run: sudo systemctl restart crio
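Net effect of the tee/sed sequence that just ran, before crio is restarted: crictl is pointed at the CRI-O socket, and 02-crio.conf is rewritten for the systemd cgroup driver. Reconstructed from the commands above (the files carry other settings not shown here), /etc/crictl.yaml and /etc/crio/crio.conf.d/02-crio.conf should now contain roughly:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys)
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]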
	I1102 13:37:26.215173  333276 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.215239  333276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.219123  333276 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.219176  333276 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.222728  333276 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.250907  333276 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.250993  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.285974  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.314527  333276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:25.531179  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.548059  333962 start.go:296] duration metric: took 146.985428ms for postStartSetup
	I1102 13:37:25.548168  333962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:25.548227  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.572631  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.670554  333962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:25.674984  333962 fix.go:56] duration metric: took 4.751471621s for fixHost
	I1102 13:37:25.675009  333962 start.go:83] releasing machines lock for "newest-cni-066482", held for 4.751529653s
	I1102 13:37:25.675073  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:25.693462  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.693510  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.693517  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.693544  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.693612  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.693646  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.693704  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.693780  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.693820  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.715629  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.832398  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.854465  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.871731  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.877714  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.886048  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889747  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889800  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.924157  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.932269  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.940725  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944474  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944520  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.982544  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.991404  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.999821  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003838  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003886  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.045614  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:26.054860  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:26.058745  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:37:26.062392  333962 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:26.062503  333962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:26.066112  333962 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:26.127272  333962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:26.165639  333962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:26.170693  333962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:26.170747  333962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:26.179292  333962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:26.179317  333962 start.go:496] detecting cgroup driver to use...
	I1102 13:37:26.179346  333962 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:26.179401  333962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:26.194965  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:26.209348  333962 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:26.209406  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:26.224797  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:26.237179  333962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:26.329871  333962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:26.424322  333962 docker.go:234] disabling docker service ...
	I1102 13:37:26.424387  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:26.439911  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:26.453248  333962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:26.542141  333962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:26.630964  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:26.643532  333962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:26.658482  333962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:26.658590  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.668170  333962 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:26.668240  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.678403  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.687532  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.697557  333962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:26.707346  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.718538  333962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.729625  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.743583  333962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.753321  333962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.761369  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.839464  333962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.938004  333962 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.938073  333962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.942145  333962 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.942204  333962 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.946060  333962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.972282  333962 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.972365  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.002057  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.032337  333962 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:27.033686  333962 cli_runner.go:164] Run: docker network inspect newest-cni-066482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
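The Go template in this `docker network inspect` call flattens the network's IPAM data into a single JSON-like object. For this cluster it would print something like the line below; Subnet, Gateway, and the container IP are inferred from the 192.168.76.x addresses elsewhere in the log, while the driver name, CIDR suffix, and MTU of 0 (the template's fallback when the MTU option is absent) are assumptions. Note the template also leaves a trailing comma inside ContainerIPs:

    {"Name": "newest-cni-066482","Driver": "bridge","Subnet": "192.168.76.0/24","Gateway": "192.168.76.1","MTU": 0, "ContainerIPs": ["192.168.76.2/24",]}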
	I1102 13:37:27.051527  333962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:27.055606  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:27.067494  333962 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1102 13:37:22.249113  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:24.748949  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:26.749600  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:26.315635  333276 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:26.333971  333276 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:26.337905  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.348667  333276 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:26.348772  333276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:26.348822  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.387710  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.387730  333276 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:26.387777  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.413505  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.413528  333276 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:26.413538  333276 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1102 13:37:26.413643  333276 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
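The doubled ExecStart in the rendered drop-in above is deliberate systemd syntax: an empty ExecStart= clears the command list inherited from the base kubelet.service, so the second line replaces the command rather than adding a second process. After the daemon-reload further down, the merged unit can be inspected with (commands assumed, not run in this log):

    systemctl cat kubelet              # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart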
	I1102 13:37:26.413707  333276 ssh_runner.go:195] Run: crio config
	I1102 13:37:26.464812  333276 cni.go:84] Creating CNI manager for ""
	I1102 13:37:26.464835  333276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:26.464845  333276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:37:26.464866  333276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538419 NodeName:default-k8s-diff-port-538419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:26.464984  333276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:37:26.465035  333276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
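The manifest rendered above bundles four documents: kubeadm's InitConfiguration and ClusterConfiguration plus a KubeletConfiguration and a KubeProxyConfiguration. Before a file like this is shipped to /var/tmp/minikube/kubeadm.yaml.new (a few lines below), it can be checked offline; a hedged sketch, assuming the kubeadm binary in use supports the validate subcommand that recent releases ship:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new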
	I1102 13:37:26.474038  333276 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:26.474098  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:26.483977  333276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 13:37:26.499882  333276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:26.512917  333276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1102 13:37:26.525720  333276 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:26.529537  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.539879  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.630475  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:26.654165  333276 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419 for IP: 192.168.85.2
	I1102 13:37:26.654186  333276 certs.go:195] generating shared ca certs ...
	I1102 13:37:26.654206  333276 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:26.654367  333276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:26.654420  333276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:26.654431  333276 certs.go:257] generating profile certs ...
	I1102 13:37:26.654503  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key
	I1102 13:37:26.654557  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d
	I1102 13:37:26.654639  333276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key
	I1102 13:37:26.654737  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:26.654764  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:26.654773  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:26.654795  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:26.654816  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:26.654836  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:26.654873  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:26.655534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:26.675380  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:26.694442  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:26.715145  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:26.740328  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 13:37:26.762384  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:37:26.779554  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:26.801750  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:37:26.818827  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:26.836709  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:26.855014  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:26.874155  333276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:26.887334  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:26.893721  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:26.902112  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905794  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905842  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.942658  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:26.950976  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:26.959359  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963079  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963124  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.004948  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.013797  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.023152  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027166  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027232  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.065532  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
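The ln -fs commands above wire up OpenSSL's hashed CA directory layout: openssl x509 -hash -noout prints the subject-name hash (b5213941 for minikubeCA here, matching the b5213941.0 link), and OpenSSL locates a trusted CA in /etc/ssl/certs by opening <hash>.0. A minimal manual check of that wiring:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${HASH}.0"   # should resolve to minikubeCA.pem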
	I1102 13:37:27.074165  333276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.078238  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.117094  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:27.159482  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:27.208066  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:27.263395  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:27.326908  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
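Each -checkend 86400 invocation above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); the command exits 0 when the cert is still good for at least that long, which is the signal to reuse it rather than regenerate. For example:

    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"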
	I1102 13:37:27.369723  333276 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:27.369813  333276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:27.369901  333276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:27.406986  333276 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:37:27.407007  333276 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:37:27.407013  333276 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:37:27.407018  333276 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:37:27.407022  333276 cri.go:89] found id: ""
	I1102 13:37:27.407085  333276 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:27.422941  333276 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:27Z" level=error msg="open /run/runc: no such file or directory"
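The runc failure here appears non-fatal: runc keeps its container state under /run/runc by default, and that directory is absent on this host, so the paused-container check falls through and the run continues with the config-file check below. A hedged way to probe which low-level runtime CRI-O is actually driving (not executed in this log):

    # default_runtime and runtime_path are standard crio config keys:
    sudo crio config 2>/dev/null | grep -E 'default_runtime|runtime_path'
    # crun, if in use, keeps its state under /run/crun instead:
    sudo ls /run/crun 2>/dev/null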
	I1102 13:37:27.423012  333276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:27.432001  333276 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:27.432029  333276 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:27.432125  333276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:27.441699  333276 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:27.442817  333276 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-538419" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.443582  333276 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-538419" cluster setting kubeconfig missing "default-k8s-diff-port-538419" context setting]
	I1102 13:37:27.444782  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.446868  333276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:27.456310  333276 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1102 13:37:27.456342  333276 kubeadm.go:602] duration metric: took 24.307485ms to restartPrimaryControlPlane
	I1102 13:37:27.456351  333276 kubeadm.go:403] duration metric: took 86.638872ms to StartCluster
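The restart decision above appears to hinge on that diff: when the freshly rendered kubeadm.yaml.new matches the kubeadm.yaml already on disk, restartPrimaryControlPlane concludes the running control plane needs no reconfiguration. A minimal sketch of the check (the real logic lives in minikube's kubeadm.go):

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "configs identical: reuse running control plane"
    else
      echo "configs drifted: reconfigure"
    fi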
	I1102 13:37:27.456373  333276 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.456425  333276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.458467  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.458734  333276 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:27.458787  333276 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:27.458879  333276 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458899  333276 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.458911  333276 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:27.458908  333276 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458932  333276 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-538419"
	I1102 13:37:27.458925  333276 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458942  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	W1102 13:37:27.458947  333276 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:27.458958  333276 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-538419"
	I1102 13:37:27.458977  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.459272  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459713  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:27.463479  333276 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:27.466531  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.489401  333276 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:27.489460  333276 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.490695  333276 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.490742  333276 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:27.490779  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.490905  333276 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.490993  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:27.491127  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.491342  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.492226  333276 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1102 13:37:24.634329  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:27.133336  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:27.068545  333962 kubeadm.go:884] updating cluster {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:27.068680  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:27.068745  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.101393  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.101420  333962 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:27.101479  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.128092  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.128116  333962 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:27.128126  333962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 13:37:27.128251  333962 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-066482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 13:37:27.128346  333962 ssh_runner.go:195] Run: crio config
	I1102 13:37:27.177989  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:27.178010  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:27.178023  333962 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1102 13:37:27.178058  333962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-066482 NodeName:newest-cni-066482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:27.178237  333962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-066482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:37:27.178304  333962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:27.189125  333962 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:27.189195  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:27.198724  333962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 13:37:27.212769  333962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:27.228632  333962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1102 13:37:27.246146  333962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:27.251613  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:27.264788  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.377806  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.402967  333962 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482 for IP: 192.168.76.2
	I1102 13:37:27.402990  333962 certs.go:195] generating shared ca certs ...
	I1102 13:37:27.403009  333962 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.403159  333962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:27.403219  333962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:27.403231  333962 certs.go:257] generating profile certs ...
	I1102 13:37:27.403335  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.key
	I1102 13:37:27.403407  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b
	I1102 13:37:27.403461  333962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key
	I1102 13:37:27.403744  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:27.403786  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:27.403799  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:27.403828  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:27.403859  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:27.403889  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:27.403938  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:27.404687  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:27.430704  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:27.452417  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:27.483637  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:27.517977  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 13:37:27.573265  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:37:27.598304  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:27.618317  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:37:27.639808  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:27.657181  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:27.681070  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:27.704152  333962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:27.722253  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:27.731519  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:27.743037  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748191  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748248  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.799685  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:27.809081  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:27.818029  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822628  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822681  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.881477  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.891397  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.900808  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904551  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904621  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.942963  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:27.952008  333962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.956221  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.997863  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:28.047948  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:28.098660  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:28.159695  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:28.224833  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1102 13:37:28.294684  333962 kubeadm.go:401] StartCluster: {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:28.294796  333962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:28.294862  333962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:28.338693  333962 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:28.338718  333962 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:28.338726  333962 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:28.338732  333962 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:28.338766  333962 cri.go:89] found id: ""
	I1102 13:37:28.338853  333962 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:28.354945  333962 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:28Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:28.355009  333962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:28.369068  333962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:28.369089  333962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:28.369134  333962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:28.379230  333962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:28.380715  333962 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-066482" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.381840  333962 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-066482" cluster setting kubeconfig missing "newest-cni-066482" context setting]
	I1102 13:37:28.383187  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.385699  333962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:28.395624  333962 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1102 13:37:28.395794  333962 kubeadm.go:602] duration metric: took 26.694184ms to restartPrimaryControlPlane
	I1102 13:37:28.395818  333962 kubeadm.go:403] duration metric: took 101.142697ms to StartCluster
	I1102 13:37:28.395872  333962 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.396257  333962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.398943  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.399509  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:28.399593  333962 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:28.399697  333962 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-066482"
	I1102 13:37:28.399715  333962 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-066482"
	W1102 13:37:28.399723  333962 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:28.399747  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400242  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400322  333962 addons.go:70] Setting dashboard=true in profile "newest-cni-066482"
	I1102 13:37:28.400358  333962 addons.go:239] Setting addon dashboard=true in "newest-cni-066482"
	W1102 13:37:28.400367  333962 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:28.400398  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400424  333962 addons.go:70] Setting default-storageclass=true in profile "newest-cni-066482"
	I1102 13:37:28.400440  333962 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-066482"
	I1102 13:37:28.400747  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400930  333962 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:28.401517  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.404755  333962 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:28.405862  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:28.441415  333962 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 13:37:28.441452  333962 addons.go:239] Setting addon default-storageclass=true in "newest-cni-066482"
	W1102 13:37:28.441469  333962 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:28.441497  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.441992  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.443413  333962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:28.443587  333962 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.493290  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:27.493307  333276 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:27.493359  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.524914  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.531668  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.532019  333276 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.532031  333276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:27.532222  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.567797  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.652323  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.668241  333276 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:27.674864  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:27.674945  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:27.680089  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.693623  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:27.693664  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:27.697013  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.711998  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:27.712105  333276 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:27.730732  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:27.730759  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:27.750616  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:27.750640  333276 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:27.770302  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:27.770348  333276 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:27.786951  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:27.786978  333276 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:27.803298  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:27.803327  333276 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:27.818949  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:27.818969  333276 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:27.832390  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:29.492024  333276 node_ready.go:49] node "default-k8s-diff-port-538419" is "Ready"
	I1102 13:37:29.492059  333276 node_ready.go:38] duration metric: took 1.82377358s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:29.492086  333276 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:29.492140  333276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:30.138979  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458843131s)
	I1102 13:37:30.139203  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.306780942s)
	I1102 13:37:30.139232  333276 api_server.go:72] duration metric: took 2.680469941s to wait for apiserver process to appear ...
	I1102 13:37:30.139245  333276 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:30.139262  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.139337  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.442032819s)
	I1102 13:37:30.140830  333276 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-538419 addons enable metrics-server
	
	I1102 13:37:30.144441  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:30.146788  333276 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
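	The 500s above are normal boot noise: the apiserver reports per-poststarthook status on /healthz until the rbac and priority-class bootstrap hooks finish, and minikube's api_server.go simply re-polls until it sees 200. A minimal standalone sketch of that poll loop in Go (the endpoint is copied from the log; skipping TLS verification is an illustrative assumption — minikube itself verifies against the cluster CA):

		package main

		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)

		func main() {
			// Poll /healthz until it returns 200 or we give up.
			client := &http.Client{
				Timeout:   2 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			url := "https://192.168.85.2:8444/healthz" // endpoint from the log above
			deadline := time.Now().Add(time.Minute)
			for time.Now().Before(deadline) {
				resp, err := client.Get(url)
				if err == nil {
					body, _ := io.ReadAll(resp.Body)
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK {
						fmt.Println("healthz ok")
						return
					}
					// A 500 listing individual [-] poststarthook failures, as in
					// the log, just means the control plane is still coming up.
					fmt.Printf("healthz %d:\n%s", resp.StatusCode, body)
				}
				time.Sleep(500 * time.Millisecond)
			}
			fmt.Println("timed out waiting for healthz")
		}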
	I1102 13:37:28.444400  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:28.444417  333962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:28.444498  333962 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.444527  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:28.444586  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.444500  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.481261  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.483777  333962 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.483797  333962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:28.483850  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.485369  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.519190  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.625401  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:28.638037  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.653422  333962 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:28.653533  333962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:28.682341  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.694090  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:28.694153  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:28.716329  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:28.716362  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:28.737776  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:28.737802  333962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:28.755596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:28.755618  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:28.780596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:28.780618  333962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:28.797326  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:28.797355  333962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:28.814533  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:28.814561  333962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:28.832611  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:28.832643  333962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:28.856649  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:28.856713  333962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:28.874888  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:31.209184  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.571053535s)
	I1102 13:37:31.209241  333962 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.555675413s)
	I1102 13:37:31.209282  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.526844296s)
	I1102 13:37:31.209372  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.334451096s)
	I1102 13:37:31.209287  333962 api_server.go:72] duration metric: took 2.808316845s to wait for apiserver process to appear ...
	I1102 13:37:31.209432  333962 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:31.209539  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.211060  333962 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-066482 addons enable metrics-server
	
	I1102 13:37:31.216831  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.222003  333962 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1102 13:37:28.750465  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:30.751057  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:31.223225  333962 addons.go:515] duration metric: took 2.823637855s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:31.709830  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.714383  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:32.209645  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:32.214358  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 13:37:32.215702  333962 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:32.215723  333962 api_server.go:131] duration metric: took 1.006197716s to wait for apiserver health ...
	I1102 13:37:32.215740  333962 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:32.219326  333962 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:32.219361  333962 system_pods.go:61] "coredns-66bc5c9577-9knvp" [fc8ccf3a-6c3a-4df9-b174-358eea8022b8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219370  333962 system_pods.go:61] "etcd-newest-cni-066482" [b4f125a2-c9c3-4192-bf23-c4ad050bb815] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:32.219379  333962 system_pods.go:61] "kindnet-schdw" [74998f6e-2a7a-40d8-a5c2-a1142f69ee93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 13:37:32.219392  333962 system_pods.go:61] "kube-apiserver-newest-cni-066482" [e270489b-3057-480f-96dd-329cbcc6f0e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:32.219397  333962 system_pods.go:61] "kube-controller-manager-newest-cni-066482" [9b62b1ef-e72e-41f9-9e3d-c57bfaf0b578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:32.219403  333962 system_pods.go:61] "kube-proxy-fkp22" [85a24a6f-4f8c-4671-92f6-fbe43ab7bb10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 13:37:32.219408  333962 system_pods.go:61] "kube-scheduler-newest-cni-066482" [5f88460d-ea42-4891-a458-b86eb57b551e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:32.219417  333962 system_pods.go:61] "storage-provisioner" [3bbb95ec-ecf8-4335-b3df-82a08d03b66b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219424  333962 system_pods.go:74] duration metric: took 3.677705ms to wait for pod list to return data ...
	I1102 13:37:32.219434  333962 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:32.221997  333962 default_sa.go:45] found service account: "default"
	I1102 13:37:32.222015  333962 default_sa.go:55] duration metric: took 2.576388ms for default service account to be created ...
	I1102 13:37:32.222026  333962 kubeadm.go:587] duration metric: took 3.821064355s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:32.222059  333962 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:32.224451  333962 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:32.224479  333962 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:32.224495  333962 node_conditions.go:105] duration metric: took 2.431117ms to run NodePressure ...
	I1102 13:37:32.224508  333962 start.go:242] waiting for startup goroutines ...
	I1102 13:37:32.224519  333962 start.go:247] waiting for cluster config update ...
	I1102 13:37:32.224531  333962 start.go:256] writing updated cluster config ...
	I1102 13:37:32.224891  333962 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:32.277880  333962 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:32.280437  333962 out.go:179] * Done! kubectl is now configured to use "newest-cni-066482" cluster and "default" namespace by default
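	The closing start.go check compares the kubectl client's minor version against the cluster's and prints the skew (0 here, so no warning follows). A toy sketch of that comparison, assuming plain "major.minor.patch" version strings:

		package main

		import (
			"fmt"
			"strconv"
			"strings"
		)

		// minor returns the minor component of a "major.minor.patch" version.
		func minor(v string) int {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			if len(parts) < 2 {
				return -1
			}
			m, err := strconv.Atoi(parts[1])
			if err != nil {
				return -1
			}
			return m
		}

		func main() {
			client, cluster := "1.34.1", "1.34.1" // values from the log above
			skew := minor(client) - minor(cluster)
			if skew < 0 {
				skew = -skew
			}
			fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
		}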
	W1102 13:37:29.133694  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:31.633878  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:32.248764  321355 pod_ready.go:94] pod "coredns-66bc5c9577-2dtpc" is "Ready"
	I1102 13:37:32.248791  321355 pod_ready.go:86] duration metric: took 36.005777547s for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.251505  321355 pod_ready.go:83] waiting for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.256003  321355 pod_ready.go:94] pod "etcd-no-preload-978795" is "Ready"
	I1102 13:37:32.256030  321355 pod_ready.go:86] duration metric: took 4.500033ms for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.258154  321355 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.262361  321355 pod_ready.go:94] pod "kube-apiserver-no-preload-978795" is "Ready"
	I1102 13:37:32.262386  321355 pod_ready.go:86] duration metric: took 4.208933ms for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.264670  321355 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.446929  321355 pod_ready.go:94] pod "kube-controller-manager-no-preload-978795" is "Ready"
	I1102 13:37:32.446958  321355 pod_ready.go:86] duration metric: took 182.263594ms for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.647228  321355 pod_ready.go:83] waiting for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.046223  321355 pod_ready.go:94] pod "kube-proxy-rmkmd" is "Ready"
	I1102 13:37:33.046245  321355 pod_ready.go:86] duration metric: took 398.98563ms for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.247357  321355 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646686  321355 pod_ready.go:94] pod "kube-scheduler-no-preload-978795" is "Ready"
	I1102 13:37:33.646712  321355 pod_ready.go:86] duration metric: took 399.328602ms for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646724  321355 pod_ready.go:40] duration metric: took 37.476249238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:33.693279  321355 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:33.695127  321355 out.go:179] * Done! kubectl is now configured to use "no-preload-978795" cluster and "default" namespace by default
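	The pod_ready.go waits that precede this reduce to listing kube-system pods by label and checking each pod's Ready condition. A compact client-go sketch of one such check (the kubeconfig path and the single kube-dns selector are assumptions; the real loop cycles through all six component labels and retries until its deadline):

		package main

		import (
			"context"
			"fmt"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			// Load ~/.kube/config (assumption; minikube manages its own kubeconfig).
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				panic(err)
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				fmt.Printf("pod %q ready=%v\n", p.Name, ready)
			}
		}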
	I1102 13:37:30.148737  333276 addons.go:515] duration metric: took 2.689945409s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:30.639704  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.646596  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.140024  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:31.144505  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1102 13:37:31.145652  333276 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:31.145677  333276 api_server.go:131] duration metric: took 1.006426268s to wait for apiserver health ...
	I1102 13:37:31.145686  333276 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:31.148654  333276 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:31.148693  333276 system_pods.go:61] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.148706  333276 system_pods.go:61] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.148715  333276 system_pods.go:61] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.148725  333276 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.148735  333276 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.148740  333276 system_pods.go:61] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.148749  333276 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.148752  333276 system_pods.go:61] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.148758  333276 system_pods.go:74] duration metric: took 3.0672ms to wait for pod list to return data ...
	I1102 13:37:31.148767  333276 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:31.151024  333276 default_sa.go:45] found service account: "default"
	I1102 13:37:31.151047  333276 default_sa.go:55] duration metric: took 2.27431ms for default service account to be created ...
	I1102 13:37:31.151056  333276 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:37:31.153886  333276 system_pods.go:86] 8 kube-system pods found
	I1102 13:37:31.153909  333276 system_pods.go:89] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.153917  333276 system_pods.go:89] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.153923  333276 system_pods.go:89] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.153933  333276 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.153941  333276 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.153948  333276 system_pods.go:89] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.153953  333276 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.153958  333276 system_pods.go:89] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.153965  333276 system_pods.go:126] duration metric: took 2.903516ms to wait for k8s-apps to be running ...
	I1102 13:37:31.153973  333276 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:37:31.154011  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:31.167191  333276 system_svc.go:56] duration metric: took 13.212049ms WaitForService to wait for kubelet
	I1102 13:37:31.167214  333276 kubeadm.go:587] duration metric: took 3.70845301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:37:31.167229  333276 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:31.170065  333276 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:31.170091  333276 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:31.170118  333276 node_conditions.go:105] duration metric: took 2.883566ms to run NodePressure ...
	I1102 13:37:31.170133  333276 start.go:242] waiting for startup goroutines ...
	I1102 13:37:31.170146  333276 start.go:247] waiting for cluster config update ...
	I1102 13:37:31.170163  333276 start.go:256] writing updated cluster config ...
	I1102 13:37:31.170468  333276 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:31.174099  333276 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:31.178339  333276 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 13:37:33.184101  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:34.134125  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:36.633840  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:35.685411  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:38.184423  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:39.134511  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:41.633152  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:40.683713  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:43.183801  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:43.634797  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:46.133702  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 02 13:37:17 no-preload-978795 crio[592]: time="2025-11-02T13:37:17.616909128Z" level=info msg="Started container" PID=1767 containerID=0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper id=d580727c-0cba-44e5-a55a-152461ff9924 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e4117bc7f194c7a6c39c8a6f3a84c95e29ad34fccd9aab4fe0a18f15c59f0fd
	Nov 02 13:37:17 no-preload-978795 crio[592]: time="2025-11-02T13:37:17.683616881Z" level=info msg="Removing container: c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3" id=1d0e8d8f-a952-41c1-a05d-b17b188fb1b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:17 no-preload-978795 crio[592]: time="2025-11-02T13:37:17.694326321Z" level=info msg="Removed container c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper" id=1d0e8d8f-a952-41c1-a05d-b17b188fb1b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.706176443Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2495f365-1ec0-43a3-8dce-82fa820efe02 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.707192341Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=61cd609a-816a-4eea-a6d3-d8106a508e85 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.708324907Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8a788ff6-da32-4149-a9f0-528d0a8db867 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.708471921Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.713481145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.713704327Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fe0aaf16441a81ca82b47034e20f32e28d34d31f5a767b89cafcadbfd70fe0dd/merged/etc/passwd: no such file or directory"
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.713744314Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fe0aaf16441a81ca82b47034e20f32e28d34d31f5a767b89cafcadbfd70fe0dd/merged/etc/group: no such file or directory"
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.714069873Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.738162128Z" level=info msg="Created container a73f6a76ff53b733881343080969a41157d4e38c47e3c02ac95e3ecec4cb6872: kube-system/storage-provisioner/storage-provisioner" id=8a788ff6-da32-4149-a9f0-528d0a8db867 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.738879963Z" level=info msg="Starting container: a73f6a76ff53b733881343080969a41157d4e38c47e3c02ac95e3ecec4cb6872" id=220aff17-079e-4f0f-9795-d87416ffbba5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:25 no-preload-978795 crio[592]: time="2025-11-02T13:37:25.741171853Z" level=info msg="Started container" PID=1781 containerID=a73f6a76ff53b733881343080969a41157d4e38c47e3c02ac95e3ecec4cb6872 description=kube-system/storage-provisioner/storage-provisioner id=220aff17-079e-4f0f-9795-d87416ffbba5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b804fba945e2d5927d3e49af96ed05ed9c53af5c472d79e813b7574db956664f
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.574360529Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=986ec599-aaa1-4fbf-bc8e-05a242c1b330 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.575256926Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f0379bd3-2c10-4674-b43e-bbf26d33ab4f name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.576248767Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper" id=63418450-4f26-4a9b-8ae0-154706e4a4b0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.576398837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.582248303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.582724468Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.606511629Z" level=info msg="Created container 70dfe3162f618f62b2c5d06ccdd44e000f798419a2105ab94355cb811cef8cd6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper" id=63418450-4f26-4a9b-8ae0-154706e4a4b0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.607210026Z" level=info msg="Starting container: 70dfe3162f618f62b2c5d06ccdd44e000f798419a2105ab94355cb811cef8cd6" id=be31a4f3-3155-44a6-8cd2-d751a29eba6a name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.6089641Z" level=info msg="Started container" PID=1842 containerID=70dfe3162f618f62b2c5d06ccdd44e000f798419a2105ab94355cb811cef8cd6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper id=be31a4f3-3155-44a6-8cd2-d751a29eba6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e4117bc7f194c7a6c39c8a6f3a84c95e29ad34fccd9aab4fe0a18f15c59f0fd
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.763851404Z" level=info msg="Removing container: 0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb" id=7cb462a0-e31b-4995-aa19-1624b9bb1c4f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:45 no-preload-978795 crio[592]: time="2025-11-02T13:37:45.856839685Z" level=info msg="Removed container 0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx/dashboard-metrics-scraper" id=7cb462a0-e31b-4995-aa19-1624b9bb1c4f name=/runtime.v1.RuntimeService/RemoveContainer
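	The RuntimeService calls in this CRI-O log (CreateContainer, StartContainer, RemoveContainer) are ordinary CRI gRPC methods on crio's unix socket — the same API the kubelet and crictl use. A minimal sketch that lists containers much like `crictl ps -a`, assuming the default /var/run/crio/crio.sock socket and the k8s.io/cri-api v1 bindings:

		package main

		import (
			"context"
			"fmt"
			"time"

			"google.golang.org/grpc"
			"google.golang.org/grpc/credentials/insecure"
			runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
		)

		func main() {
			// Dial the CRI-O socket (assumed default path; needs root).
			conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
				grpc.WithTransportCredentials(insecure.NewCredentials()))
			if err != nil {
				panic(err)
			}
			defer conn.Close()

			ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
			defer cancel()

			client := runtimeapi.NewRuntimeServiceClient(conn)
			resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
			if err != nil {
				panic(err)
			}
			for _, c := range resp.Containers {
				id := c.Id
				if len(id) > 13 {
					id = id[:13] // same truncated form the status table uses
				}
				// Metadata.Attempt is the ATTEMPT column in the table below.
				fmt.Printf("%s  %s  attempt=%d  %s\n", id, c.Metadata.Name, c.Metadata.Attempt, c.State)
			}
		}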
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	70dfe3162f618       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   5e4117bc7f194       dashboard-metrics-scraper-6ffb444bf9-g58tx   kubernetes-dashboard
	a73f6a76ff53b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   b804fba945e2d       storage-provisioner                          kube-system
	50fcd43f519f5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   cd249615a072e       kubernetes-dashboard-855c9754f9-hnwjb        kubernetes-dashboard
	9b098d972b6bc       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   1f5baf157ff7a       busybox                                      default
	41eca953e7391       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   b804fba945e2d       storage-provisioner                          kube-system
	00e1b7154486f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   0a39ce8c11aaa       coredns-66bc5c9577-2dtpc                     kube-system
	8279fbff65bbb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   b89ec3cebc4a6       kube-proxy-rmkmd                             kube-system
	a53e288237c42       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   6efec60d92bdf       kindnet-d8n4x                                kube-system
	d75465a215601       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   7c240bd52ae45       etcd-no-preload-978795                       kube-system
	34d6d45b3c166       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   6432b67c75c64       kube-apiserver-no-preload-978795             kube-system
	42ae707ea436f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   7415536fb5b42       kube-controller-manager-no-preload-978795    kube-system
	05fdfa04c3355       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   ca88872b31f78       kube-scheduler-no-preload-978795             kube-system
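	This table is the visible half of a crash loop: dashboard-metrics-scraper is Exited on ATTEMPT 3, and the CRI-O log above shows each retry removing the previous container and creating a fresh one. The kubelet spaces those retries with CrashLoopBackOff, an exponential delay that (per kubelet's documented behavior) starts at 10s, doubles per restart, and caps at 5m — sketched here:

		package main

		import (
			"fmt"
			"time"
		)

		func main() {
			// CrashLoopBackOff schedule: 10s base, doubled each restart, 5m cap.
			delay := 10 * time.Second
			const maxDelay = 5 * time.Minute
			for attempt := 1; attempt <= 7; attempt++ {
				fmt.Printf("attempt %d: next restart in %v\n", attempt, delay)
				delay *= 2
				if delay > maxDelay {
					delay = maxDelay
				}
			}
		}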
	
	
	==> coredns [00e1b7154486fe063132557a594935f8ddd4344e0b78ebd768f66fc54e72cefb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59139 - 21047 "HINFO IN 1169371270981510889.7365533718010229387. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042025875s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
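	This log also explains the long coredns Ready waits above: the kubernetes plugin cannot list Services and EndpointSlices until the service VIP 10.96.0.1:443 is reachable, and until it syncs, the `ready` plugin keeps the pod's readiness probe failing. That probe is a plain HTTP GET against port 8181, sketched below (the pod IP is a placeholder):

		package main

		import (
			"fmt"
			"net/http"
			"time"
		)

		func main() {
			// CoreDNS's "ready" plugin serves /ready on :8181 and returns 503
			// until every readiness-signalling plugin (here: kubernetes) is up.
			client := &http.Client{Timeout: 2 * time.Second}
			resp, err := client.Get("http://10.244.0.10:8181/ready") // placeholder pod IP
			if err != nil {
				fmt.Println("probe failed:", err)
				return
			}
			defer resp.Body.Close()
			fmt.Println("ready status:", resp.StatusCode)
		}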
	
	
	==> describe nodes <==
	Name:               no-preload-978795
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-978795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=no-preload-978795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_35_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:35:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-978795
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:37:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:37:25 +0000   Sun, 02 Nov 2025 13:35:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:37:25 +0000   Sun, 02 Nov 2025 13:35:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:37:25 +0000   Sun, 02 Nov 2025 13:35:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:37:25 +0000   Sun, 02 Nov 2025 13:36:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-978795
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                886d43f2-0cc9-4abe-b8a0-71a0f502a9fe
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-2dtpc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-no-preload-978795                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-d8n4x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-978795              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-978795     200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-rmkmd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-978795              100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g58tx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hnwjb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node no-preload-978795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node no-preload-978795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node no-preload-978795 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           112s               node-controller  Node no-preload-978795 event: Registered Node no-preload-978795 in Controller
	  Normal  NodeReady                98s                kubelet          Node no-preload-978795 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node no-preload-978795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node no-preload-978795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node no-preload-978795 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node no-preload-978795 event: Registered Node no-preload-978795 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [d75465a215601ad6902284a8f4ac503bad1e462f3234ddee3675f0f0f025f32b] <==
	{"level":"warn","ts":"2025-11-02T13:36:53.826419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.834233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.842422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.848832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.854491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.860545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.867181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.874503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.880416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.886811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.897678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.904468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.910864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.917958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.926978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.934980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.942198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.949046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.962363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.968434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.974711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.980676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:53.996791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:54.002543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:36:54.008970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47228","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:37:50 up  1:20,  0 user,  load average: 3.38, 3.91, 2.67
	Linux no-preload-978795 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a53e288237c421817e13c84735208e1931104dd178dda81c3e30acbe2d0a7400] <==
	I1102 13:36:55.186672       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:36:55.186933       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1102 13:36:55.187115       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:36:55.187133       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:36:55.187160       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:36:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:36:55.386853       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:36:55.386891       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:36:55.386907       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:36:55.387459       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:36:55.787467       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:36:55.787494       1 metrics.go:72] Registering metrics
	I1102 13:36:55.787553       1 controller.go:711] "Syncing nftables rules"
	I1102 13:37:05.387645       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:37:05.387701       1 main.go:301] handling current node
	I1102 13:37:15.387786       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:37:15.387829       1 main.go:301] handling current node
	I1102 13:37:25.387865       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:37:25.387935       1 main.go:301] handling current node
	I1102 13:37:35.389241       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:37:35.389288       1 main.go:301] handling current node
	I1102 13:37:45.395838       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1102 13:37:45.395875       1 main.go:301] handling current node
	
	
	==> kube-apiserver [34d6d45b3c166e3ece7cae55497eada59f4b0a2911a5e1fda5cfa3e653f11a69] <==
	I1102 13:36:54.556162       1 cache.go:39] Caches are synced for autoregister controller
	I1102 13:36:54.556352       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 13:36:54.556530       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 13:36:54.556598       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 13:36:54.557403       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 13:36:54.557557       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1102 13:36:54.557643       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 13:36:54.557803       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 13:36:54.558895       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 13:36:54.558820       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:36:54.567870       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1102 13:36:54.569901       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 13:36:54.576441       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:36:54.609030       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:36:54.615608       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:36:54.906026       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 13:36:54.967214       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:36:54.990066       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:36:55.000152       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:36:55.043831       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.127.105"}
	I1102 13:36:55.057094       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.117.24"}
	I1102 13:36:55.459537       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:36:58.260016       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:36:58.312773       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 13:36:58.410275       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [42ae707ea436fef32dc405c69f4b8a2094bf96e5cb62e2fb5a4f97d5c5f87181] <==
	I1102 13:36:57.906980       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1102 13:36:57.906993       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 13:36:57.907027       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 13:36:57.907017       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 13:36:57.907048       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 13:36:57.907080       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 13:36:57.907081       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 13:36:57.907138       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:36:57.907138       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 13:36:57.907146       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 13:36:57.907146       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:36:57.907155       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:36:57.907205       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 13:36:57.907520       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 13:36:57.907082       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 13:36:57.907830       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1102 13:36:57.907840       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 13:36:57.908507       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 13:36:57.908528       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 13:36:57.911252       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 13:36:57.911292       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:36:57.912797       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 13:36:57.915096       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 13:36:57.917528       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:36:57.931736       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8279fbff65bbb7eeddd8cf2d2a8220d1e7e1e278aaf553e70450472f1a32cd21] <==
	I1102 13:36:54.990999       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:36:55.051753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:36:55.152397       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:36:55.152441       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1102 13:36:55.152587       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:36:55.170540       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:36:55.170618       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:36:55.175640       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:36:55.176031       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:36:55.176068       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:36:55.177337       1 config.go:200] "Starting service config controller"
	I1102 13:36:55.177360       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:36:55.177360       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:36:55.177381       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:36:55.177408       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:36:55.177413       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:36:55.177428       1 config.go:309] "Starting node config controller"
	I1102 13:36:55.177445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:36:55.277837       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:36:55.277873       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 13:36:55.277847       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:36:55.277893       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [05fdfa04c33553bae9ce98eabd014d2c5fe0f3155fff9f5518fc306c67872c48] <==
	I1102 13:36:52.529111       1 serving.go:386] Generated self-signed cert in-memory
	W1102 13:36:54.476422       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 13:36:54.476471       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 13:36:54.476484       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 13:36:54.476493       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 13:36:54.530479       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:36:54.530605       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:36:54.532743       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:36:54.532844       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:36:54.533258       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:36:54.533614       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 13:36:54.633886       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:37:02 no-preload-978795 kubelet[739]: I1102 13:37:02.634418     739 scope.go:117] "RemoveContainer" containerID="af3bef5723435b36c2b9bee47bb13046291906aa82131203969eac05b1605d87"
	Nov 02 13:37:02 no-preload-978795 kubelet[739]: I1102 13:37:02.634778     739 scope.go:117] "RemoveContainer" containerID="c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3"
	Nov 02 13:37:02 no-preload-978795 kubelet[739]: E1102 13:37:02.634953     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:03 no-preload-978795 kubelet[739]: I1102 13:37:03.637875     739 scope.go:117] "RemoveContainer" containerID="c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3"
	Nov 02 13:37:03 no-preload-978795 kubelet[739]: E1102 13:37:03.638016     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:04 no-preload-978795 kubelet[739]: I1102 13:37:04.642255     739 scope.go:117] "RemoveContainer" containerID="c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3"
	Nov 02 13:37:04 no-preload-978795 kubelet[739]: E1102 13:37:04.642435     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:04 no-preload-978795 kubelet[739]: I1102 13:37:04.653441     739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hnwjb" podStartSLOduration=1.084994272 podStartE2EDuration="6.653421222s" podCreationTimestamp="2025-11-02 13:36:58 +0000 UTC" firstStartedPulling="2025-11-02 13:36:58.869636265 +0000 UTC m=+7.419947577" lastFinishedPulling="2025-11-02 13:37:04.438063214 +0000 UTC m=+12.988374527" observedRunningTime="2025-11-02 13:37:04.653164092 +0000 UTC m=+13.203475414" watchObservedRunningTime="2025-11-02 13:37:04.653421222 +0000 UTC m=+13.203732543"
	Nov 02 13:37:17 no-preload-978795 kubelet[739]: I1102 13:37:17.573662     739 scope.go:117] "RemoveContainer" containerID="c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3"
	Nov 02 13:37:17 no-preload-978795 kubelet[739]: I1102 13:37:17.682112     739 scope.go:117] "RemoveContainer" containerID="c6ff419f7c011d0a6d083b2af1c0ad2a475615cc2b9da8d7560f040d9a1890c3"
	Nov 02 13:37:17 no-preload-978795 kubelet[739]: I1102 13:37:17.682337     739 scope.go:117] "RemoveContainer" containerID="0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb"
	Nov 02 13:37:17 no-preload-978795 kubelet[739]: E1102 13:37:17.682536     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:22 no-preload-978795 kubelet[739]: I1102 13:37:22.669403     739 scope.go:117] "RemoveContainer" containerID="0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb"
	Nov 02 13:37:22 no-preload-978795 kubelet[739]: E1102 13:37:22.669603     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:25 no-preload-978795 kubelet[739]: I1102 13:37:25.705700     739 scope.go:117] "RemoveContainer" containerID="41eca953e73915c5487a90eacb76fd485c68d0c1a2f13cf2a8df0205bdc80ac9"
	Nov 02 13:37:34 no-preload-978795 kubelet[739]: I1102 13:37:34.572979     739 scope.go:117] "RemoveContainer" containerID="0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb"
	Nov 02 13:37:34 no-preload-978795 kubelet[739]: E1102 13:37:34.573217     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:45 no-preload-978795 kubelet[739]: I1102 13:37:45.573894     739 scope.go:117] "RemoveContainer" containerID="0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb"
	Nov 02 13:37:45 no-preload-978795 kubelet[739]: I1102 13:37:45.762509     739 scope.go:117] "RemoveContainer" containerID="0e77dc9f95a7ed4f9dc5b1257fadea8f09584abdb359062fe4018fac599683eb"
	Nov 02 13:37:45 no-preload-978795 kubelet[739]: I1102 13:37:45.762863     739 scope.go:117] "RemoveContainer" containerID="70dfe3162f618f62b2c5d06ccdd44e000f798419a2105ab94355cb811cef8cd6"
	Nov 02 13:37:45 no-preload-978795 kubelet[739]: E1102 13:37:45.763059     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g58tx_kubernetes-dashboard(468cbb57-b576-49f2-83f0-4ba78ae62a72)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g58tx" podUID="468cbb57-b576-49f2-83f0-4ba78ae62a72"
	Nov 02 13:37:45 no-preload-978795 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:37:45 no-preload-978795 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:37:45 no-preload-978795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 02 13:37:45 no-preload-978795 systemd[1]: kubelet.service: Consumed 1.729s CPU time.
	
	
	==> kubernetes-dashboard [50fcd43f519f563b94c125de25dd0ac0b2df5a5a28d43437ec792a2453868dbd] <==
	2025/11/02 13:37:04 Starting overwatch
	2025/11/02 13:37:04 Using namespace: kubernetes-dashboard
	2025/11/02 13:37:04 Using in-cluster config to connect to apiserver
	2025/11/02 13:37:04 Using secret token for csrf signing
	2025/11/02 13:37:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 13:37:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 13:37:04 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 13:37:04 Generating JWE encryption key
	2025/11/02 13:37:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 13:37:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 13:37:04 Initializing JWE encryption key from synchronized object
	2025/11/02 13:37:04 Creating in-cluster Sidecar client
	2025/11/02 13:37:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 13:37:04 Serving insecurely on HTTP port: 9090
	2025/11/02 13:37:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [41eca953e73915c5487a90eacb76fd485c68d0c1a2f13cf2a8df0205bdc80ac9] <==
	I1102 13:36:54.950011       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 13:37:24.954952       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a73f6a76ff53b733881343080969a41157d4e38c47e3c02ac95e3ecec4cb6872] <==
	I1102 13:37:25.754478       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:37:25.765902       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:37:25.765976       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 13:37:25.768423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:29.224727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:33.484737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:37.084347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:40.138848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:43.161610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:43.165930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:37:43.166085       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:37:43.166226       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f711ec74-9f5e-4d88-b29d-598bc126b1de", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-978795_26a11bab-726b-4109-ba35-f3830f0f29fe became leader
	I1102 13:37:43.166300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-978795_26a11bab-726b-4109-ba35-f3830f0f29fe!
	W1102 13:37:43.168122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:43.171469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:37:43.267369       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-978795_26a11bab-726b-4109-ba35-f3830f0f29fe!
	W1102 13:37:45.174233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:45.178291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:47.181722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:47.185893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:49.189350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:49.199112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
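The dump above ends with both CoreDNS and the first storage-provisioner container timing out against the apiserver service VIP ("dial tcp 10.96.0.1:443: i/o timeout"), while the second storage-provisioner instance recovers once the restarted apiserver is reachable. A minimal reachability probe for that path, run from the node itself, might look like the sketch below; the profile name and URL come from the logs, but the presence of curl in the node image is an assumption.

	# Sketch: probe the in-cluster apiserver VIP from the minikube node (not part of the harness).
	# curl is an assumption here; it may need to be swapped for wget/nc depending on the kicbase image.
	minikube -p no-preload-978795 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version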
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-978795 -n no-preload-978795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-978795 -n no-preload-978795: exit status 2 (327.187296ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-978795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.40s)
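To iterate on a single failure like this outside CI, Go's test filter can re-run just the one subtest; the package path below is an assumption about the minikube source layout, not something this report states.

	# Hypothetical local re-run of only the failing subtest (assumes out/minikube-linux-amd64 is already built):
	go test ./test/integration -run 'TestStartStop/group/no-preload/serial/Pause' -v -timeout 30m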

TestStartStop/group/embed-certs/serial/Pause (6.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-748183 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-748183 --alsologtostderr -v=1: exit status 80 (2.362589499s)

-- stdout --
	* Pausing node embed-certs-748183 ... 
	
	

-- /stdout --
** stderr ** 
	I1102 13:38:02.808659  342350 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:38:02.808992  342350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:38:02.809006  342350 out.go:374] Setting ErrFile to fd 2...
	I1102 13:38:02.809012  342350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:38:02.809282  342350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:38:02.809734  342350 out.go:368] Setting JSON to false
	I1102 13:38:02.809785  342350 mustload.go:66] Loading cluster: embed-certs-748183
	I1102 13:38:02.810289  342350 config.go:182] Loaded profile config "embed-certs-748183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:38:02.810971  342350 cli_runner.go:164] Run: docker container inspect embed-certs-748183 --format={{.State.Status}}
	I1102 13:38:02.831489  342350 host.go:66] Checking if "embed-certs-748183" exists ...
	I1102 13:38:02.831822  342350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:38:02.892749  342350 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-02 13:38:02.878876237 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:38:02.893399  342350 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-748183 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 13:38:02.896241  342350 out.go:179] * Pausing node embed-certs-748183 ... 
	I1102 13:38:02.897887  342350 host.go:66] Checking if "embed-certs-748183" exists ...
	I1102 13:38:02.898199  342350 ssh_runner.go:195] Run: systemctl --version
	I1102 13:38:02.898249  342350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-748183
	I1102 13:38:02.916069  342350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/embed-certs-748183/id_rsa Username:docker}
	I1102 13:38:03.014221  342350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:38:03.026235  342350 pause.go:52] kubelet running: true
	I1102 13:38:03.026297  342350 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:38:03.187457  342350 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:38:03.187542  342350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:38:03.253602  342350 cri.go:89] found id: "99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b"
	I1102 13:38:03.253622  342350 cri.go:89] found id: "8b58b034d001ed44effae858626302ae16cc57c4e26297e53e6cb6b96e66cf48"
	I1102 13:38:03.253626  342350 cri.go:89] found id: "d9bd80a8cd406f1eb033d3ba6453e88c337437c7215205f73e35e0729b0a960e"
	I1102 13:38:03.253629  342350 cri.go:89] found id: "e8c35dcf7d68a661284766a04b37fc308886ce23eb03d3449f58204b53949056"
	I1102 13:38:03.253632  342350 cri.go:89] found id: "08ed3a888e10792c720b91d4af71d51d5756b14ec6e8b23bd5574eacf0dd9cfe"
	I1102 13:38:03.253635  342350 cri.go:89] found id: "92c81ac32663feb2e55e81de4aea9ec83b4adedd0494edb88c83e13189d4ab75"
	I1102 13:38:03.253638  342350 cri.go:89] found id: "915e447acc04f2663378328784e388e7b53096e05c75aacb4faa06eac072d743"
	I1102 13:38:03.253640  342350 cri.go:89] found id: "7ce1beed8bfeca2e3dbe79de858297d5596eb32ea1a78ba33516e86fff957e00"
	I1102 13:38:03.253643  342350 cri.go:89] found id: "4f580374d707565df73a17f079d127e0b80c61ce6670bb6a10a142440e8d5a5a"
	I1102 13:38:03.253648  342350 cri.go:89] found id: "f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71"
	I1102 13:38:03.253651  342350 cri.go:89] found id: "1e7e496e5f29b31984c3f1c59eaba41bdd280208ffae335779ddf825b58a686e"
	I1102 13:38:03.253653  342350 cri.go:89] found id: ""
	I1102 13:38:03.253689  342350 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:38:03.265494  342350 retry.go:31] will retry after 266.91569ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:38:03Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:38:03.533024  342350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:38:03.558467  342350 pause.go:52] kubelet running: false
	I1102 13:38:03.558536  342350 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:38:03.700836  342350 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:38:03.700924  342350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:38:03.767004  342350 cri.go:89] found id: "99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b"
	I1102 13:38:03.767025  342350 cri.go:89] found id: "8b58b034d001ed44effae858626302ae16cc57c4e26297e53e6cb6b96e66cf48"
	I1102 13:38:03.767028  342350 cri.go:89] found id: "d9bd80a8cd406f1eb033d3ba6453e88c337437c7215205f73e35e0729b0a960e"
	I1102 13:38:03.767031  342350 cri.go:89] found id: "e8c35dcf7d68a661284766a04b37fc308886ce23eb03d3449f58204b53949056"
	I1102 13:38:03.767046  342350 cri.go:89] found id: "08ed3a888e10792c720b91d4af71d51d5756b14ec6e8b23bd5574eacf0dd9cfe"
	I1102 13:38:03.767050  342350 cri.go:89] found id: "92c81ac32663feb2e55e81de4aea9ec83b4adedd0494edb88c83e13189d4ab75"
	I1102 13:38:03.767052  342350 cri.go:89] found id: "915e447acc04f2663378328784e388e7b53096e05c75aacb4faa06eac072d743"
	I1102 13:38:03.767055  342350 cri.go:89] found id: "7ce1beed8bfeca2e3dbe79de858297d5596eb32ea1a78ba33516e86fff957e00"
	I1102 13:38:03.767057  342350 cri.go:89] found id: "4f580374d707565df73a17f079d127e0b80c61ce6670bb6a10a142440e8d5a5a"
	I1102 13:38:03.767069  342350 cri.go:89] found id: "f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71"
	I1102 13:38:03.767072  342350 cri.go:89] found id: "1e7e496e5f29b31984c3f1c59eaba41bdd280208ffae335779ddf825b58a686e"
	I1102 13:38:03.767074  342350 cri.go:89] found id: ""
	I1102 13:38:03.767117  342350 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:38:03.778845  342350 retry.go:31] will retry after 200.302431ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:38:03Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:38:03.980271  342350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:38:03.993895  342350 pause.go:52] kubelet running: false
	I1102 13:38:03.993957  342350 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:38:04.139688  342350 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:38:04.139760  342350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:38:04.207345  342350 cri.go:89] found id: "99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b"
	I1102 13:38:04.207374  342350 cri.go:89] found id: "8b58b034d001ed44effae858626302ae16cc57c4e26297e53e6cb6b96e66cf48"
	I1102 13:38:04.207379  342350 cri.go:89] found id: "d9bd80a8cd406f1eb033d3ba6453e88c337437c7215205f73e35e0729b0a960e"
	I1102 13:38:04.207382  342350 cri.go:89] found id: "e8c35dcf7d68a661284766a04b37fc308886ce23eb03d3449f58204b53949056"
	I1102 13:38:04.207385  342350 cri.go:89] found id: "08ed3a888e10792c720b91d4af71d51d5756b14ec6e8b23bd5574eacf0dd9cfe"
	I1102 13:38:04.207389  342350 cri.go:89] found id: "92c81ac32663feb2e55e81de4aea9ec83b4adedd0494edb88c83e13189d4ab75"
	I1102 13:38:04.207391  342350 cri.go:89] found id: "915e447acc04f2663378328784e388e7b53096e05c75aacb4faa06eac072d743"
	I1102 13:38:04.207394  342350 cri.go:89] found id: "7ce1beed8bfeca2e3dbe79de858297d5596eb32ea1a78ba33516e86fff957e00"
	I1102 13:38:04.207396  342350 cri.go:89] found id: "4f580374d707565df73a17f079d127e0b80c61ce6670bb6a10a142440e8d5a5a"
	I1102 13:38:04.207407  342350 cri.go:89] found id: "f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71"
	I1102 13:38:04.207410  342350 cri.go:89] found id: "1e7e496e5f29b31984c3f1c59eaba41bdd280208ffae335779ddf825b58a686e"
	I1102 13:38:04.207412  342350 cri.go:89] found id: ""
	I1102 13:38:04.207452  342350 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:38:04.219709  342350 retry.go:31] will retry after 643.746205ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:38:04Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:38:04.864656  342350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:38:04.877957  342350 pause.go:52] kubelet running: false
	I1102 13:38:04.878018  342350 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:38:05.018453  342350 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:38:05.018533  342350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:38:05.085987  342350 cri.go:89] found id: "99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b"
	I1102 13:38:05.086006  342350 cri.go:89] found id: "8b58b034d001ed44effae858626302ae16cc57c4e26297e53e6cb6b96e66cf48"
	I1102 13:38:05.086010  342350 cri.go:89] found id: "d9bd80a8cd406f1eb033d3ba6453e88c337437c7215205f73e35e0729b0a960e"
	I1102 13:38:05.086013  342350 cri.go:89] found id: "e8c35dcf7d68a661284766a04b37fc308886ce23eb03d3449f58204b53949056"
	I1102 13:38:05.086016  342350 cri.go:89] found id: "08ed3a888e10792c720b91d4af71d51d5756b14ec6e8b23bd5574eacf0dd9cfe"
	I1102 13:38:05.086019  342350 cri.go:89] found id: "92c81ac32663feb2e55e81de4aea9ec83b4adedd0494edb88c83e13189d4ab75"
	I1102 13:38:05.086021  342350 cri.go:89] found id: "915e447acc04f2663378328784e388e7b53096e05c75aacb4faa06eac072d743"
	I1102 13:38:05.086023  342350 cri.go:89] found id: "7ce1beed8bfeca2e3dbe79de858297d5596eb32ea1a78ba33516e86fff957e00"
	I1102 13:38:05.086026  342350 cri.go:89] found id: "4f580374d707565df73a17f079d127e0b80c61ce6670bb6a10a142440e8d5a5a"
	I1102 13:38:05.086041  342350 cri.go:89] found id: "f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71"
	I1102 13:38:05.086045  342350 cri.go:89] found id: "1e7e496e5f29b31984c3f1c59eaba41bdd280208ffae335779ddf825b58a686e"
	I1102 13:38:05.086048  342350 cri.go:89] found id: ""
	I1102 13:38:05.086098  342350 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:38:05.099815  342350 out.go:203] 
	W1102 13:38:05.100975  342350 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:38:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:38:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:38:05.100989  342350 out.go:285] * 
	* 
	W1102 13:38:05.105110  342350 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:38:05.106347  342350 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-748183 --alsologtostderr -v=1 failed: exit status 80
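The pause above dies on a single probe: before freezing anything, minikube asks runc for the list of running containers, retries once after ~644ms (the retry.go:31 line at the top of the log), and aborts with GUEST_PAUSE when /run/runc is still missing on the second attempt. A minimal, self-contained Go sketch of that probe follows; the command string is verbatim from the log, while the function name and error wrapping are illustrative rather than minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// listRunc mirrors the probe in the log: `sudo runc list -f json`.
// On this node it exits 1 with `open /run/runc: no such file or directory`,
// which is exactly what surfaces as GUEST_PAUSE above.
func listRunc() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return out, fmt.Errorf("runc list: %w: %s", err, out)
	}
	return out, nil
}

func main() {
	if out, err := listRunc(); err != nil {
		fmt.Println(err)
	} else {
		fmt.Printf("%s\n", out)
	}
}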
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-748183
helpers_test.go:243: (dbg) docker inspect embed-certs-748183:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6",
	        "Created": "2025-11-02T13:35:52.708051752Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:37:03.110993535Z",
	            "FinishedAt": "2025-11-02T13:37:01.699734434Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/hostname",
	        "HostsPath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/hosts",
	        "LogPath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6-json.log",
	        "Name": "/embed-certs-748183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-748183:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-748183",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6",
	                "LowerDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-748183",
	                "Source": "/var/lib/docker/volumes/embed-certs-748183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-748183",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-748183",
	                "name.minikube.sigs.k8s.io": "embed-certs-748183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f96ec663de9609b2e699ecf36991d2f130acd465d56bf7acdb7082122201a9a",
	            "SandboxKey": "/var/run/docker/netns/8f96ec663de9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-748183": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:eb:5f:c7:76:3d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4e27916e6204d80d9f3ecde4dc1f7e05cab435dec08a0139421fe16b2b896e8b",
	                    "EndpointID": "3fb1ce7183eac01450479cda1644a14ea98577807d0980fa6ebd7d2b5fd617ae",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-748183",
	                        "a897616b7925"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
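Rather than parsing the full JSON payload above, the harness extracts single fields with Go templates; both templates in the sketch below appear verbatim later in this log ({{.State.Status}} for container state, and the NetworkSettings.Ports index expression for the SSH host port). The helper name is mine, and the expected values in the final comment (running, 33125) are read off the inspect output above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectField runs `docker container inspect -f <template> <container>`
// and returns the rendered template with surrounding whitespace trimmed.
func inspectField(container, tmpl string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const name = "embed-certs-748183"
	state, err := inspectField(name, "{{.State.Status}}")
	if err != nil {
		panic(err)
	}
	sshPort, err := inspectField(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("state=%s ssh-port=%s\n", state, sshPort) // state=running ssh-port=33125 in the run above
}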
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-748183 -n embed-certs-748183
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-748183 -n embed-certs-748183: exit status 2 (318.24268ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
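Note the status probe exits 2 here even though the host reports Running; the harness itself waves this through as "may be ok". A hedged Go sketch of that tolerant check follows, reusing the exact command from the log. Treating exit status 2 as non-fatal is an assumption borrowed from the harness's own comment, not documented minikube behavior.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs `out/minikube-linux-amd64 status --format={{.Host}}`
// for a profile and tolerates exit status 2, whose stdout (e.g. "Running")
// is still usable even though the summary exit code is non-zero.
func hostStatus(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 2 {
		err = nil // "may be ok": host state printed, other components degraded
	}
	return strings.TrimSpace(string(out)), err
}

func main() {
	host, err := hostStatus("embed-certs-748183")
	if err != nil {
		panic(err)
	}
	fmt.Println(host) // prints "Running" in the run above
}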
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-748183 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-748183 logs -n 25: (1.091293094s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ stop    │ -p embed-certs-748183 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538419 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p embed-certs-748183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ stop    │ -p newest-cni-066482 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-066482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ newest-cni-066482 image list --format=json                                                                                                                                                                                                    │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p newest-cni-066482 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ no-preload-978795 image list --format=json                                                                                                                                                                                                    │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p no-preload-978795 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ delete  │ -p no-preload-978795                                                                                                                                                                                                                          │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p no-preload-978795                                                                                                                                                                                                                          │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ embed-certs-748183 image list --format=json                                                                                                                                                                                                   │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │ 02 Nov 25 13:38 UTC │
	│ pause   │ -p embed-certs-748183 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:37:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:37:20.524373  333962 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:37:20.524647  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524664  333962 out.go:374] Setting ErrFile to fd 2...
	I1102 13:37:20.524670  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524846  333962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:37:20.525403  333962 out.go:368] Setting JSON to false
	I1102 13:37:20.526966  333962 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4793,"bootTime":1762085848,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:37:20.527085  333962 start.go:143] virtualization: kvm guest
	I1102 13:37:20.531180  333962 out.go:179] * [newest-cni-066482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:37:20.533535  333962 notify.go:221] Checking for updates...
	I1102 13:37:20.533705  333962 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:37:20.535165  333962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:37:20.536733  333962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:20.538369  333962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:37:20.539773  333962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:37:20.541014  333962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:37:20.543949  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:20.544901  333962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:37:20.580929  333962 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:37:20.581269  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.677940  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.664880977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.678092  333962 docker.go:319] overlay module found
	I1102 13:37:20.686090  333962 out.go:179] * Using the docker driver based on existing profile
	I1102 13:37:20.689767  333962 start.go:309] selected driver: docker
	I1102 13:37:20.689788  333962 start.go:930] validating driver "docker" against &{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.689907  333962 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:37:20.690830  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.765132  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.75342287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.765679  333962 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:20.765731  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:20.765799  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:20.765881  333962 start.go:353] cluster config:
	{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.825212  333962 out.go:179] * Starting "newest-cni-066482" primary control-plane node in "newest-cni-066482" cluster
	I1102 13:37:20.829240  333962 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:37:20.869092  333962 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:37:20.895924  333962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:37:20.895925  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:20.896230  333962 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:37:20.896249  333962 cache.go:59] Caching tarball of preloaded images
	I1102 13:37:20.896370  333962 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:37:20.896389  333962 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:37:20.896531  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:20.923310  333962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:37:20.923336  333962 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:37:20.923354  333962 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:37:20.923397  333962 start.go:360] acquireMachinesLock for newest-cni-066482: {Name:mk25ceca9700045fc79c727ac5793f50b1f35f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:37:20.923467  333962 start.go:364] duration metric: took 45.165µs to acquireMachinesLock for "newest-cni-066482"
	I1102 13:37:20.923495  333962 start.go:96] Skipping create...Using existing machine configuration
	I1102 13:37:20.923507  333962 fix.go:54] fixHost starting: 
	I1102 13:37:20.923821  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:20.947956  333962 fix.go:112] recreateIfNeeded on newest-cni-066482: state=Stopped err=<nil>
	W1102 13:37:20.947991  333962 fix.go:138] unexpected machine state, will restart: <nil>
	W1102 13:37:17.749910  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:19.754111  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:18.133437  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:20.135974  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:22.633523  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:19.800458  333276 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-538419" ...
	I1102 13:37:19.800582  333276 cli_runner.go:164] Run: docker start default-k8s-diff-port-538419
	I1102 13:37:20.258040  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:20.285518  333276 kic.go:430] container "default-k8s-diff-port-538419" state is running.
	I1102 13:37:20.285975  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:20.314790  333276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:37:20.315668  333276 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:20.316243  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:20.344162  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:20.344635  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:20.344656  333276 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:20.345938  333276 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42554->127.0.0.1:33130: read: connection reset by peer
	I1102 13:37:23.485888  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.485911  333276 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-538419"
	I1102 13:37:23.485968  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.504539  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.504787  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.504808  333276 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-538419 && echo "default-k8s-diff-port-538419" | sudo tee /etc/hostname
	I1102 13:37:23.654299  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.654392  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.673075  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.673329  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.673355  333276 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-538419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-538419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-538419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:23.814290  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:23.814321  333276 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:23.814341  333276 ubuntu.go:190] setting up certificates
	I1102 13:37:23.814351  333276 provision.go:84] configureAuth start
	I1102 13:37:23.814396  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:23.831955  333276 provision.go:143] copyHostCerts
	I1102 13:37:23.832026  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:23.832046  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:23.832132  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:23.832261  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:23.832273  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:23.832318  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:23.832420  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:23.832433  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:23.832471  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:23.832546  333276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-538419 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-538419 localhost minikube]
	I1102 13:37:24.219472  333276 provision.go:177] copyRemoteCerts
	I1102 13:37:24.219536  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.219587  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.237848  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.340891  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 13:37:24.358910  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:24.376167  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:24.393830  333276 provision.go:87] duration metric: took 579.46643ms to configureAuth
	I1102 13:37:24.393865  333276 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:24.394064  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:24.394157  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.412877  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.413122  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:24.413143  333276 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:20.978818  333962 out.go:252] * Restarting existing docker container for "newest-cni-066482" ...
	I1102 13:37:20.978914  333962 cli_runner.go:164] Run: docker start newest-cni-066482
	I1102 13:37:21.270167  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:21.288682  333962 kic.go:430] container "newest-cni-066482" state is running.
	I1102 13:37:21.289009  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:21.309331  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:21.309611  333962 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:21.309709  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:21.330053  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:21.330413  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:21.330432  333962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:21.331174  333962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55362->127.0.0.1:33135: read: connection reset by peer
	I1102 13:37:24.473386  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.473415  333962 ubuntu.go:182] provisioning hostname "newest-cni-066482"
	I1102 13:37:24.473479  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.491931  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.492137  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.492150  333962 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-066482 && echo "newest-cni-066482" | sudo tee /etc/hostname
	I1102 13:37:24.643677  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.643803  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.663238  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.663468  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.663495  333962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-066482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-066482/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-066482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:24.810077  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:24.810117  333962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:24.810141  333962 ubuntu.go:190] setting up certificates
	I1102 13:37:24.810156  333962 provision.go:84] configureAuth start
	I1102 13:37:24.810212  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:24.827792  333962 provision.go:143] copyHostCerts
	I1102 13:37:24.827858  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:24.827875  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:24.827953  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:24.828150  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:24.828164  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:24.828215  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:24.828305  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:24.828317  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:24.828355  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:24.828426  333962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.newest-cni-066482 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-066482]
	I1102 13:37:24.927237  333962 provision.go:177] copyRemoteCerts
	I1102 13:37:24.927289  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.927321  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.944584  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.045425  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:25.062863  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:25.080629  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:37:25.097296  333962 provision.go:87] duration metric: took 287.125327ms to configureAuth
	I1102 13:37:25.097332  333962 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:25.097535  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:25.097668  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.115731  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:25.115937  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:25.115955  333962 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:25.401017  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:25.401045  333962 machine.go:97] duration metric: took 4.091415666s to provisionDockerMachine
	I1102 13:37:25.401058  333962 start.go:293] postStartSetup for "newest-cni-066482" (driver="docker")
	I1102 13:37:25.401071  333962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:25.401154  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:25.401203  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.420252  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.519659  333962 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:25.522994  333962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:25.523015  333962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:25.523025  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:25.523068  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:25.523146  333962 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:25.523246  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.712619  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:24.712652  333276 machine.go:97] duration metric: took 4.396840284s to provisionDockerMachine
	I1102 13:37:24.712667  333276 start.go:293] postStartSetup for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:37:24.712682  333276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:24.712766  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:24.712819  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.733777  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.836037  333276 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:24.839702  333276 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:24.839733  333276 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:24.839744  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:24.839789  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:24.839894  333276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:24.840014  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.847534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:24.864718  333276 start.go:296] duration metric: took 152.035287ms for postStartSetup
	I1102 13:37:24.864791  333276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:24.864826  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.884885  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.983028  333276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
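	[editor's note] The two df invocations above read percent-used and GB-available on /var. The same numbers can be taken directly from statfs(2) without shelling out; a Linux-only sketch (the arithmetic only approximates df's rounding):

	//go:build linux

	package main

	import (
		"fmt"
		"syscall"
	)

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/var", &st); err != nil {
			panic(err)
		}
		bsize := uint64(st.Bsize)
		total := st.Blocks * bsize
		avail := st.Bavail * bsize
		used := total - st.Bfree*bsize
		fmt.Printf("used: %d%%\n", used*100/total) // roughly df's "Use%" column
		fmt.Printf("available: %dG\n", avail>>30)  // roughly df -BG's "Avail" column
	}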
	I1102 13:37:24.987641  333276 fix.go:56] duration metric: took 5.212515962s for fixHost
	I1102 13:37:24.987669  333276 start.go:83] releasing machines lock for "default-k8s-diff-port-538419", held for 5.212566618s
	I1102 13:37:24.987736  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:25.007034  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.007083  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.007090  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.007125  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.007153  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.007176  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.007213  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.007274  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.007319  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:25.024428  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:25.135885  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.153535  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.171518  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.177840  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.186217  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190875  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190931  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.225348  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.233857  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.242147  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245844  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245889  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.282977  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:25.290988  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.299515  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303360  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303415  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.338843  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.348256  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:25.352326  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
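	[editor's note] Each CA installed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above). Computing that hash natively is fiddly, so this sketch reuses the same openssl invocation the log shows; note the log guards with test -L to skip existing links, whereas this version always forces the link:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert mirrors: openssl x509 -hash -noout -in <pem>; ln -fs <pem> /etc/ssl/certs/<hash>.0
	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // emulate ln -fs (force)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}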
	I1102 13:37:25.357122  333276 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:25.357227  333276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:25.361283  333276 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:25.422770  333276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:25.458920  333276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:25.463750  333276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:25.463815  333276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:25.471852  333276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:25.471874  333276 start.go:496] detecting cgroup driver to use...
	I1102 13:37:25.471904  333276 detect.go:190] detected "systemd" cgroup driver on host os
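	[editor's note] detect.go reports a "systemd" cgroup driver on the host here. One common heuristic for that decision (not necessarily the exact check minikube performs) is whether PID 1 is systemd:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func cgroupDriver() string {
		// When PID 1 is systemd, container runtimes are normally configured
		// with the systemd cgroup manager; otherwise fall back to cgroupfs.
		comm, err := os.ReadFile("/proc/1/comm")
		if err == nil && strings.TrimSpace(string(comm)) == "systemd" {
			return "systemd"
		}
		return "cgroupfs"
	}

	func main() { fmt.Println(cgroupDriver()) }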
	I1102 13:37:25.471948  333276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:25.485878  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:25.497990  333276 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:25.498045  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:25.512402  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:25.525187  333276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:25.608539  333276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:25.688830  333276 docker.go:234] disabling docker service ...
	I1102 13:37:25.688921  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:25.705783  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:25.723506  333276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:25.813168  333276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:25.898289  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:25.910519  333276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:25.924524  333276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:25.924604  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.933372  333276 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:25.933426  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.942218  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.951107  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.959830  333276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:25.967946  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.977032  333276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.986463  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
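	[editor's note] The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force cgroup_manager to systemd, re-insert conmon_cgroup = "pod", and open unprivileged ports via default_sysctls. A sketch of the first few edits as Go regexp replacements; the anchored ^.* patterns match commented-out variants too, just as the log's sed patterns do:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		s := string(data)
		// Equivalent to: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, `cgroup_manager = "systemd"`)
		// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
		s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
		s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")
		if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
			panic(err)
		}
	}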
	I1102 13:37:25.995429  333276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.003006  333276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.010445  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.094219  333276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.215173  333276 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.215239  333276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.219123  333276 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.219176  333276 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.222728  333276 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.250907  333276 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
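	[editor's note] After restarting crio, start.go waits up to 60s for /var/run/crio/crio.sock before asking crictl for the version block above. A small polling loop of the same stat-based shape:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes,
	// mirroring minikube's "Will wait 60s for socket path" step.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(250 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}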
	I1102 13:37:26.250993  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.285974  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.314527  333276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:25.531179  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.548059  333962 start.go:296] duration metric: took 146.985428ms for postStartSetup
	I1102 13:37:25.548168  333962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:25.548227  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.572631  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.670554  333962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:25.674984  333962 fix.go:56] duration metric: took 4.751471621s for fixHost
	I1102 13:37:25.675009  333962 start.go:83] releasing machines lock for "newest-cni-066482", held for 4.751529653s
	I1102 13:37:25.675073  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:25.693462  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.693510  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.693517  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.693544  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.693612  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.693646  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.693704  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.693780  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.693820  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.715629  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.832398  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.854465  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.871731  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.877714  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.886048  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889747  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889800  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.924157  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.932269  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.940725  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944474  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944520  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.982544  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.991404  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.999821  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003838  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003886  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.045614  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:26.054860  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:26.058745  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:37:26.062392  333962 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:26.062503  333962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:26.066112  333962 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:26.127272  333962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:26.165639  333962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:26.170693  333962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:26.170747  333962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:26.179292  333962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:26.179317  333962 start.go:496] detecting cgroup driver to use...
	I1102 13:37:26.179346  333962 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:26.179401  333962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:26.194965  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:26.209348  333962 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:26.209406  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:26.224797  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:26.237179  333962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:26.329871  333962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:26.424322  333962 docker.go:234] disabling docker service ...
	I1102 13:37:26.424387  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:26.439911  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:26.453248  333962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:26.542141  333962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:26.630964  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:26.643532  333962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:26.658482  333962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:26.658590  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.668170  333962 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:26.668240  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.678403  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.687532  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.697557  333962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:26.707346  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.718538  333962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.729625  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.743583  333962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.753321  333962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.761369  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.839464  333962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.938004  333962 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.938073  333962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.942145  333962 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.942204  333962 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.946060  333962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.972282  333962 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.972365  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.002057  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.032337  333962 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:27.033686  333962 cli_runner.go:164] Run: docker network inspect newest-cni-066482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:27.051527  333962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:27.055606  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:27.067494  333962 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1102 13:37:22.249113  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:24.748949  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:26.749600  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:26.315635  333276 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:26.333971  333276 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:26.337905  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
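	[editor's note] Both profiles pin host.minikube.internal in /etc/hosts with the same shell idiom above: strip any prior line for the name, append the fresh mapping, write a temp file, then cp it back. An equivalent Go sketch; the temp file name stands in for the shell's /tmp/h.$$. The final copy overwrites /etc/hosts in place rather than renaming over it, which matters inside containers where /etc/hosts is typically a bind mount that cannot be atomically replaced:

	package main

	import (
		"os"
		"strings"
	)

	// pinHost removes any existing line ending in "\t<name>" and appends a fresh
	// "<ip>\t<name>" entry, like the grep -v / echo / cp pipeline in the log.
	func pinHost(ip, name string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		tmp := "/tmp/hosts.new" // stand-in for /tmp/h.$$
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		out, err := os.ReadFile(tmp)
		if err != nil {
			return err
		}
		return os.WriteFile("/etc/hosts", out, 0o644) // cp, not rename
	}

	func main() {
		if err := pinHost("192.168.85.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}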
	I1102 13:37:26.348667  333276 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:26.348772  333276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:26.348822  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.387710  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.387730  333276 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:26.387777  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.413505  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.413528  333276 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:26.413538  333276 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1102 13:37:26.413643  333276 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 13:37:26.413707  333276 ssh_runner.go:195] Run: crio config
	I1102 13:37:26.464812  333276 cni.go:84] Creating CNI manager for ""
	I1102 13:37:26.464835  333276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:26.464845  333276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:37:26.464866  333276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538419 NodeName:default-k8s-diff-port-538419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:26.464984  333276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
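	[editor's note] For reference, the generated KubeletConfiguration above can be round-tripped to verify the fields that drive the cgroup setup. A toy check with gopkg.in/yaml.v3; the struct and field subset are illustrative, not minikube's actual types:

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	type kubeletConfig struct {
		APIVersion   string `yaml:"apiVersion"`
		Kind         string `yaml:"kind"`
		CgroupDriver string `yaml:"cgroupDriver"`
		FailSwapOn   bool   `yaml:"failSwapOn"`
	}

	func main() {
		doc := []byte(`apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: systemd
	failSwapOn: false
	`)
		var kc kubeletConfig
		if err := yaml.Unmarshal(doc, &kc); err != nil {
			panic(err)
		}
		fmt.Printf("%s uses %q cgroup driver\n", kc.Kind, kc.CgroupDriver)
	}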
	
	I1102 13:37:26.465035  333276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:26.474038  333276 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:26.474098  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:26.483977  333276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 13:37:26.499882  333276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:26.512917  333276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1102 13:37:26.525720  333276 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:26.529537  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.539879  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.630475  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:26.654165  333276 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419 for IP: 192.168.85.2
	I1102 13:37:26.654186  333276 certs.go:195] generating shared ca certs ...
	I1102 13:37:26.654206  333276 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:26.654367  333276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:26.654420  333276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:26.654431  333276 certs.go:257] generating profile certs ...
	I1102 13:37:26.654503  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key
	I1102 13:37:26.654557  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d
	I1102 13:37:26.654639  333276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key
	I1102 13:37:26.654737  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:26.654764  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:26.654773  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:26.654795  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:26.654816  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:26.654836  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:26.654873  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:26.655534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:26.675380  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:26.694442  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:26.715145  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:26.740328  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 13:37:26.762384  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:37:26.779554  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:26.801750  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:37:26.818827  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:26.836709  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:26.855014  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:26.874155  333276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:26.887334  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:26.893721  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:26.902112  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905794  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905842  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.942658  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:26.950976  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:26.959359  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963079  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963124  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.004948  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.013797  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.023152  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027166  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027232  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.065532  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:27.074165  333276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.078238  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.117094  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:27.159482  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:27.208066  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:27.263395  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:27.326908  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
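	[editor's note] The six openssl -checkend 86400 calls above verify each control-plane cert is still valid 24 hours out. The same check expressed with Go's crypto/x509 (single-cert version; the path is one of those from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first PEM certificate in path expires
	// inside d, matching `openssl x509 -noout -checkend <seconds>` semantics.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}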
	I1102 13:37:27.369723  333276 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:27.369813  333276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:27.369901  333276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:27.406986  333276 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:37:27.407007  333276 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:37:27.407013  333276 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:37:27.407018  333276 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:37:27.407022  333276 cri.go:89] found id: ""
	I1102 13:37:27.407085  333276 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:27.422941  333276 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:27Z" level=error msg="open /run/runc: no such file or directory"
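	[editor's note] The unpause probe tolerates this runc failure: a missing /run/runc simply means no containers have ever been paused, so the warning is logged and startup continues. A sketch of handling the same non-zero exit in Go with os/exec; treating this particular stderr as benign is the caller's judgment, as it is here:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	func listPaused() ([]byte, error) {
		cmd := exec.Command("sudo", "runc", "list", "-f", "json")
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		out, err := cmd.Output()
		if err != nil {
			if strings.Contains(stderr.String(), "no such file or directory") {
				// /run/runc absent: nothing has been paused; not a fatal error.
				return nil, nil
			}
			return nil, fmt.Errorf("runc list: %w: %s", err, stderr.String())
		}
		return out, nil
	}

	func main() {
		out, err := listPaused()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d bytes of container state\n", len(out))
	}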
	I1102 13:37:27.423012  333276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:27.432001  333276 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:27.432029  333276 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:27.432125  333276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:27.441699  333276 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:27.442817  333276 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-538419" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.443582  333276 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-538419" cluster setting kubeconfig missing "default-k8s-diff-port-538419" context setting]
	I1102 13:37:27.444782  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.446868  333276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:27.456310  333276 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1102 13:37:27.456342  333276 kubeadm.go:602] duration metric: took 24.307485ms to restartPrimaryControlPlane
	I1102 13:37:27.456351  333276 kubeadm.go:403] duration metric: took 86.638872ms to StartCluster
	I1102 13:37:27.456373  333276 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.456425  333276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.458467  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.458734  333276 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:27.458787  333276 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:27.458879  333276 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458899  333276 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.458911  333276 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:27.458908  333276 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458932  333276 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-538419"
	I1102 13:37:27.458925  333276 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458942  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	W1102 13:37:27.458947  333276 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:27.458958  333276 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-538419"
	I1102 13:37:27.458977  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.459272  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459713  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:27.463479  333276 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:27.466531  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.489401  333276 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:27.489460  333276 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.490695  333276 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.490742  333276 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:27.490779  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.490905  333276 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.490993  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:27.491127  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.491342  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.492226  333276 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1102 13:37:24.634329  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:27.133336  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:27.068545  333962 kubeadm.go:884] updating cluster {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:27.068680  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:27.068745  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.101393  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.101420  333962 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:27.101479  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.128092  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.128116  333962 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:27.128126  333962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 13:37:27.128251  333962 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-066482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
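
The drop-in above overrides the kubelet's ExecStart with cluster-specific flags (bootstrap kubeconfig, node IP, hostname override). A minimal Go sketch of that rendering step, assuming a hypothetical template and parameter struct rather than minikube's actual types:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Hypothetical template mirroring the kubelet drop-in shown above; it only
    // illustrates the rendering step, not minikube's real implementation.
    const kubeletUnit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	params := struct{ BinDir, NodeName, NodeIP string }{
    		BinDir:   "/var/lib/minikube/binaries/v1.34.1",
    		NodeName: "newest-cni-066482",
    		NodeIP:   "192.168.76.2",
    	}
    	// The rendered bytes would then be copied to
    	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp line below).
    	tmpl := template.Must(template.New("unit").Parse(kubeletUnit))
    	if err := tmpl.Execute(os.Stdout, params); err != nil {
    		panic(err)
    	}
    }
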
	I1102 13:37:27.128346  333962 ssh_runner.go:195] Run: crio config
	I1102 13:37:27.177989  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:27.178010  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:27.178023  333962 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1102 13:37:27.178058  333962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-066482 NodeName:newest-cni-066482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:27.178237  333962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-066482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
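
The generated config above is one YAML stream of four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A small dependency-free Go sketch that splits such a stream and reports each document's kind, assuming the file path the log scp's below:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Path taken from the scp line below; any kubeadm config stream works.
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Plain string handling keeps the sketch dependency-free; a real tool
    	// would decode each document with a YAML parser.
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
    				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
    			}
    		}
    	}
    }
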
	I1102 13:37:27.178304  333962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:27.189125  333962 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:27.189195  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:27.198724  333962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 13:37:27.212769  333962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:27.228632  333962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1102 13:37:27.246146  333962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:27.251613  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
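
The one-liner above makes the hosts entry idempotent: grep -v strips any stale control-plane.minikube.internal line, the fresh mapping is appended, and the result is staged through a temp file before overwriting /etc/hosts. The same transformation as a Go sketch, with the IP and hostname taken from the logged command:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const host = "control-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	var keep []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale mapping, like the grep -v in the logged one-liner.
    		if !strings.HasSuffix(line, "\t"+host) {
    			keep = append(keep, line)
    		}
    	}
    	keep = append(keep, "192.168.76.2\t"+host)
    	// The bash version stages through /tmp/h.$$ and copies with sudo; here
    	// we write directly, which needs the same root privileges.
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
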
	I1102 13:37:27.264788  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.377806  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.402967  333962 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482 for IP: 192.168.76.2
	I1102 13:37:27.402990  333962 certs.go:195] generating shared ca certs ...
	I1102 13:37:27.403009  333962 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.403159  333962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:27.403219  333962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:27.403231  333962 certs.go:257] generating profile certs ...
	I1102 13:37:27.403335  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.key
	I1102 13:37:27.403407  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b
	I1102 13:37:27.403461  333962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key
	I1102 13:37:27.403744  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:27.403786  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:27.403799  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:27.403828  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:27.403859  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:27.403889  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:27.403938  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:27.404687  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:27.430704  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:27.452417  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:27.483637  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:27.517977  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 13:37:27.573265  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:37:27.598304  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:27.618317  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:37:27.639808  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:27.657181  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:27.681070  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:27.704152  333962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:27.722253  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:27.731519  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:27.743037  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748191  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748248  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.799685  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:27.809081  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:27.818029  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822628  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822681  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.881477  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.891397  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.900808  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904551  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904621  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.942963  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:27.952008  333962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.956221  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.997863  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:28.047948  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:28.098660  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:28.159695  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:28.224833  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
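
Each openssl x509 -checkend 86400 run above asks whether the certificate stays valid for at least another 86400 seconds (24 hours). An equivalent check in Go with crypto/x509, using one of the cert paths from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Same test as `openssl x509 -checkend 86400`: fail when the cert
    	// expires within the next 24 hours.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid until", cert.NotAfter)
    }
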
	I1102 13:37:28.294684  333962 kubeadm.go:401] StartCluster: {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:28.294796  333962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:28.294862  333962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:28.338693  333962 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:28.338718  333962 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:28.338726  333962 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:28.338732  333962 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:28.338766  333962 cri.go:89] found id: ""
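
The found id lines above come from listing kube-system containers through crictl with a pod-namespace label filter. A Go sketch that reproduces that listing with os/exec, reusing exactly the flags shown in the log line:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Exactly the flags from the log line: list all kube-system container IDs.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	for _, id := range strings.Fields(string(out)) {
    		fmt.Println("found id:", id)
    	}
    }
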
	I1102 13:37:28.338853  333962 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:28.354945  333962 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:28Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:28.355009  333962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:28.369068  333962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:28.369089  333962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:28.369134  333962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:28.379230  333962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:28.380715  333962 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-066482" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.381840  333962 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-066482" cluster setting kubeconfig missing "newest-cni-066482" context setting]
	I1102 13:37:28.383187  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.385699  333962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:28.395624  333962 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1102 13:37:28.395794  333962 kubeadm.go:602] duration metric: took 26.694184ms to restartPrimaryControlPlane
	I1102 13:37:28.395818  333962 kubeadm.go:403] duration metric: took 101.142697ms to StartCluster
	I1102 13:37:28.395872  333962 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.396257  333962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.398943  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.399509  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:28.399593  333962 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:28.399697  333962 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-066482"
	I1102 13:37:28.399715  333962 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-066482"
	W1102 13:37:28.399723  333962 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:28.399747  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400242  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400322  333962 addons.go:70] Setting dashboard=true in profile "newest-cni-066482"
	I1102 13:37:28.400358  333962 addons.go:239] Setting addon dashboard=true in "newest-cni-066482"
	W1102 13:37:28.400367  333962 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:28.400398  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400424  333962 addons.go:70] Setting default-storageclass=true in profile "newest-cni-066482"
	I1102 13:37:28.400440  333962 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-066482"
	I1102 13:37:28.400747  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400930  333962 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:28.401517  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.404755  333962 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:28.405862  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:28.441415  333962 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 13:37:28.441452  333962 addons.go:239] Setting addon default-storageclass=true in "newest-cni-066482"
	W1102 13:37:28.441469  333962 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:28.441497  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.441992  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.443413  333962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:28.443587  333962 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.493290  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:27.493307  333276 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:27.493359  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.524914  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.531668  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.532019  333276 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.532031  333276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:27.532222  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.567797  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.652323  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.668241  333276 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:27.674864  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:27.674945  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:27.680089  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.693623  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:27.693664  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:27.697013  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.711998  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:27.712105  333276 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:27.730732  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:27.730759  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:27.750616  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:27.750640  333276 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:27.770302  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:27.770348  333276 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:27.786951  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:27.786978  333276 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:27.803298  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:27.803327  333276 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:27.818949  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:27.818969  333276 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:27.832390  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:29.492024  333276 node_ready.go:49] node "default-k8s-diff-port-538419" is "Ready"
	I1102 13:37:29.492059  333276 node_ready.go:38] duration metric: took 1.82377358s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:29.492086  333276 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:29.492140  333276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:30.138979  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458843131s)
	I1102 13:37:30.139203  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.306780942s)
	I1102 13:37:30.139232  333276 api_server.go:72] duration metric: took 2.680469941s to wait for apiserver process to appear ...
	I1102 13:37:30.139245  333276 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:30.139262  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.139337  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.442032819s)
	I1102 13:37:30.140830  333276 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-538419 addons enable metrics-server
	
	I1102 13:37:30.144441  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.144472  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:30.146788  333276 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1102 13:37:28.444400  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:28.444417  333962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:28.444498  333962 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.444527  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:28.444586  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.444500  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.481261  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.483777  333962 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.483797  333962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:28.483850  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.485369  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.519190  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.625401  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:28.638037  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.653422  333962 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:28.653533  333962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:28.682341  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.694090  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:28.694153  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:28.716329  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:28.716362  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:28.737776  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:28.737802  333962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:28.755596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:28.755618  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:28.780596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:28.780618  333962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:28.797326  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:28.797355  333962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:28.814533  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:28.814561  333962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:28.832611  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:28.832643  333962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:28.856649  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:28.856713  333962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:28.874888  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:31.209184  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.571053535s)
	I1102 13:37:31.209241  333962 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.555675413s)
	I1102 13:37:31.209282  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.526844296s)
	I1102 13:37:31.209372  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.334451096s)
	I1102 13:37:31.209287  333962 api_server.go:72] duration metric: took 2.808316845s to wait for apiserver process to appear ...
	I1102 13:37:31.209432  333962 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:31.209539  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.211060  333962 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-066482 addons enable metrics-server
	
	I1102 13:37:31.216831  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.216854  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.222003  333962 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1102 13:37:28.750465  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:30.751057  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:31.223225  333962 addons.go:515] duration metric: took 2.823637855s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:31.709830  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.714383  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.714411  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:32.209645  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:32.214358  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 13:37:32.215702  333962 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:32.215723  333962 api_server.go:131] duration metric: took 1.006197716s to wait for apiserver health ...
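
The healthz wait that just completed polls the endpoint until it returns 200; in the 500 responses above, the [-] lines name the poststarthooks still pending (here the RBAC bootstrap roles). A Go sketch of such a polling loop, with InsecureSkipVerify standing in for loading the cluster CA from /var/lib/minikube/certs/ca.crt:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Skipping TLS verification is an assumption for brevity; a faithful
    	// client would trust the minikube CA instead.
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err == nil {
    			ok := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if ok {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }
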
	I1102 13:37:32.215740  333962 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:32.219326  333962 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:32.219361  333962 system_pods.go:61] "coredns-66bc5c9577-9knvp" [fc8ccf3a-6c3a-4df9-b174-358eea8022b8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219370  333962 system_pods.go:61] "etcd-newest-cni-066482" [b4f125a2-c9c3-4192-bf23-c4ad050bb815] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:32.219379  333962 system_pods.go:61] "kindnet-schdw" [74998f6e-2a7a-40d8-a5c2-a1142f69ee93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 13:37:32.219392  333962 system_pods.go:61] "kube-apiserver-newest-cni-066482" [e270489b-3057-480f-96dd-329cbcc6f0e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:32.219397  333962 system_pods.go:61] "kube-controller-manager-newest-cni-066482" [9b62b1ef-e72e-41f9-9e3d-c57bfaf0b578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:32.219403  333962 system_pods.go:61] "kube-proxy-fkp22" [85a24a6f-4f8c-4671-92f6-fbe43ab7bb10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 13:37:32.219408  333962 system_pods.go:61] "kube-scheduler-newest-cni-066482" [5f88460d-ea42-4891-a458-b86eb57b551e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:32.219417  333962 system_pods.go:61] "storage-provisioner" [3bbb95ec-ecf8-4335-b3df-82a08d03b66b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219424  333962 system_pods.go:74] duration metric: took 3.677705ms to wait for pod list to return data ...
	I1102 13:37:32.219434  333962 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:32.221997  333962 default_sa.go:45] found service account: "default"
	I1102 13:37:32.222015  333962 default_sa.go:55] duration metric: took 2.576388ms for default service account to be created ...
	I1102 13:37:32.222026  333962 kubeadm.go:587] duration metric: took 3.821064355s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:32.222059  333962 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:32.224451  333962 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:32.224479  333962 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:32.224495  333962 node_conditions.go:105] duration metric: took 2.431117ms to run NodePressure ...
	I1102 13:37:32.224508  333962 start.go:242] waiting for startup goroutines ...
	I1102 13:37:32.224519  333962 start.go:247] waiting for cluster config update ...
	I1102 13:37:32.224531  333962 start.go:256] writing updated cluster config ...
	I1102 13:37:32.224891  333962 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:32.277880  333962 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:32.280437  333962 out.go:179] * Done! kubectl is now configured to use "newest-cni-066482" cluster and "default" namespace by default
	W1102 13:37:29.133694  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:31.633878  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:32.248764  321355 pod_ready.go:94] pod "coredns-66bc5c9577-2dtpc" is "Ready"
	I1102 13:37:32.248791  321355 pod_ready.go:86] duration metric: took 36.005777547s for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.251505  321355 pod_ready.go:83] waiting for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.256003  321355 pod_ready.go:94] pod "etcd-no-preload-978795" is "Ready"
	I1102 13:37:32.256030  321355 pod_ready.go:86] duration metric: took 4.500033ms for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.258154  321355 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.262361  321355 pod_ready.go:94] pod "kube-apiserver-no-preload-978795" is "Ready"
	I1102 13:37:32.262386  321355 pod_ready.go:86] duration metric: took 4.208933ms for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.264670  321355 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.446929  321355 pod_ready.go:94] pod "kube-controller-manager-no-preload-978795" is "Ready"
	I1102 13:37:32.446958  321355 pod_ready.go:86] duration metric: took 182.263594ms for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.647228  321355 pod_ready.go:83] waiting for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.046223  321355 pod_ready.go:94] pod "kube-proxy-rmkmd" is "Ready"
	I1102 13:37:33.046245  321355 pod_ready.go:86] duration metric: took 398.98563ms for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.247357  321355 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646686  321355 pod_ready.go:94] pod "kube-scheduler-no-preload-978795" is "Ready"
	I1102 13:37:33.646712  321355 pod_ready.go:86] duration metric: took 399.328602ms for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646724  321355 pod_ready.go:40] duration metric: took 37.476249238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:33.693279  321355 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:33.695127  321355 out.go:179] * Done! kubectl is now configured to use "no-preload-978795" cluster and "default" namespace by default
	I1102 13:37:30.148737  333276 addons.go:515] duration metric: took 2.689945409s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:30.639704  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.646596  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.646625  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
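	(The verbose /healthz body above shows every check passing except poststarthook/rbac/bootstrap-roles, a hook that normally clears moments after apiserver startup; the 200 logged at 13:37:31 below confirms it did. The same per-check [+]/[-] listing can be pulled from a live cluster via kubectl's raw API access:

	    kubectl get --raw '/healthz?verbose'
	)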
	I1102 13:37:31.140024  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:31.144505  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1102 13:37:31.145652  333276 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:31.145677  333276 api_server.go:131] duration metric: took 1.006426268s to wait for apiserver health ...
	I1102 13:37:31.145686  333276 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:31.148654  333276 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:31.148693  333276 system_pods.go:61] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.148706  333276 system_pods.go:61] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.148715  333276 system_pods.go:61] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.148725  333276 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.148735  333276 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.148740  333276 system_pods.go:61] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.148749  333276 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.148752  333276 system_pods.go:61] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.148758  333276 system_pods.go:74] duration metric: took 3.0672ms to wait for pod list to return data ...
	I1102 13:37:31.148767  333276 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:31.151024  333276 default_sa.go:45] found service account: "default"
	I1102 13:37:31.151047  333276 default_sa.go:55] duration metric: took 2.27431ms for default service account to be created ...
	I1102 13:37:31.151056  333276 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:37:31.153886  333276 system_pods.go:86] 8 kube-system pods found
	I1102 13:37:31.153909  333276 system_pods.go:89] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.153917  333276 system_pods.go:89] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.153923  333276 system_pods.go:89] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.153933  333276 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.153941  333276 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.153948  333276 system_pods.go:89] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.153953  333276 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.153958  333276 system_pods.go:89] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.153965  333276 system_pods.go:126] duration metric: took 2.903516ms to wait for k8s-apps to be running ...
	I1102 13:37:31.153973  333276 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:37:31.154011  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:31.167191  333276 system_svc.go:56] duration metric: took 13.212049ms WaitForService to wait for kubelet
	I1102 13:37:31.167214  333276 kubeadm.go:587] duration metric: took 3.70845301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:37:31.167229  333276 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:31.170065  333276 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:31.170091  333276 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:31.170118  333276 node_conditions.go:105] duration metric: took 2.883566ms to run NodePressure ...
	I1102 13:37:31.170133  333276 start.go:242] waiting for startup goroutines ...
	I1102 13:37:31.170146  333276 start.go:247] waiting for cluster config update ...
	I1102 13:37:31.170163  333276 start.go:256] writing updated cluster config ...
	I1102 13:37:31.170468  333276 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:31.174099  333276 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:31.178339  333276 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 13:37:33.184101  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:34.134125  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:36.633840  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:35.685411  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:38.184423  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:39.134511  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:41.633152  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:40.683713  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:43.183801  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:43.634797  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:46.133702  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:45.684695  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:48.183904  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:48.633463  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:49.633961  328990 pod_ready.go:94] pod "coredns-66bc5c9577-vpq66" is "Ready"
	I1102 13:37:49.633983  328990 pod_ready.go:86] duration metric: took 36.006114822s for pod "coredns-66bc5c9577-vpq66" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.636373  328990 pod_ready.go:83] waiting for pod "etcd-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.640305  328990 pod_ready.go:94] pod "etcd-embed-certs-748183" is "Ready"
	I1102 13:37:49.640326  328990 pod_ready.go:86] duration metric: took 3.933112ms for pod "etcd-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.642169  328990 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.645917  328990 pod_ready.go:94] pod "kube-apiserver-embed-certs-748183" is "Ready"
	I1102 13:37:49.645933  328990 pod_ready.go:86] duration metric: took 3.743148ms for pod "kube-apiserver-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.647713  328990 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.832391  328990 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748183" is "Ready"
	I1102 13:37:49.832415  328990 pod_ready.go:86] duration metric: took 184.682932ms for pod "kube-controller-manager-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.032477  328990 pod_ready.go:83] waiting for pod "kube-proxy-pg8nt" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.432219  328990 pod_ready.go:94] pod "kube-proxy-pg8nt" is "Ready"
	I1102 13:37:50.432252  328990 pod_ready.go:86] duration metric: took 399.749991ms for pod "kube-proxy-pg8nt" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.632021  328990 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:51.032263  328990 pod_ready.go:94] pod "kube-scheduler-embed-certs-748183" is "Ready"
	I1102 13:37:51.032285  328990 pod_ready.go:86] duration metric: took 400.23928ms for pod "kube-scheduler-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:51.032297  328990 pod_ready.go:40] duration metric: took 37.407986415s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:51.078471  328990 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:51.080252  328990 out.go:179] * Done! kubectl is now configured to use "embed-certs-748183" cluster and "default" namespace by default
	W1102 13:37:50.684482  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:52.684813  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:55.183972  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:57.684208  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:38:00.183283  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:38:02.184008  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
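	(These pod_ready warnings are minikube's extra wait loop polling the labelled kube-system pods until each reports Ready or disappears. A roughly equivalent manual check for the coredns case, assuming kubectl points at the same cluster:

	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m

	Note that kubectl wait fails rather than succeeds if the pod is deleted, so it only approximates the "Ready or be gone" condition used above.)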
	
	
	==> CRI-O <==
	Nov 02 13:37:23 embed-certs-748183 crio[591]: time="2025-11-02T13:37:23.67936101Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 13:37:23 embed-certs-748183 crio[591]: time="2025-11-02T13:37:23.6828204Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 13:37:23 embed-certs-748183 crio[591]: time="2025-11-02T13:37:23.68284281Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.924838148Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4648dd56-fe23-4cd5-8603-aaed5ee411ee name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.929322765Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5bb1e3a2-2aa9-4f96-bcef-be824010bd65 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.935445015Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx/dashboard-metrics-scraper" id=00a5407d-652e-4074-b405-867c18c5e51d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.935681963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.947008076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.947590886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.995977446Z" level=info msg="Created container f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx/dashboard-metrics-scraper" id=00a5407d-652e-4074-b405-867c18c5e51d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.996702371Z" level=info msg="Starting container: f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71" id=b383bc96-a6ca-4a11-9416-dd30864d1410 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.999049212Z" level=info msg="Started container" PID=1771 containerID=f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx/dashboard-metrics-scraper id=b383bc96-a6ca-4a11-9416-dd30864d1410 name=/runtime.v1.RuntimeService/StartContainer sandboxID=473df30c6502e80b8647383ea9b909db07a669059203c34b3cd74af5f9fb65fb
	Nov 02 13:37:38 embed-certs-748183 crio[591]: time="2025-11-02T13:37:38.029620485Z" level=info msg="Removing container: c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623" id=7ff4b281-32b7-43ae-a50c-6d8009560e9f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:38 embed-certs-748183 crio[591]: time="2025-11-02T13:37:38.041699557Z" level=info msg="Removed container c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx/dashboard-metrics-scraper" id=7ff4b281-32b7-43ae-a50c-6d8009560e9f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.047272757Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e2046be2-b0c3-494b-90c8-583406e135f9 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.048224203Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a98d6694-eb56-41b7-8793-d967514083fa name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.049336053Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=22d754af-87b8-4df2-ad38-b9d34fd71d2e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.049468452Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.053835385Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.054010783Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/64135ffa976b4acda2936164de473d3e1816f7b236152f11748b19c1ae9da0e7/merged/etc/passwd: no such file or directory"
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.054038307Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/64135ffa976b4acda2936164de473d3e1816f7b236152f11748b19c1ae9da0e7/merged/etc/group: no such file or directory"
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.054269804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.080824687Z" level=info msg="Created container 99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b: kube-system/storage-provisioner/storage-provisioner" id=22d754af-87b8-4df2-ad38-b9d34fd71d2e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.081429757Z" level=info msg="Starting container: 99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b" id=64c1ec71-00b0-4df7-bd5a-2f1f8611805d name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.083115071Z" level=info msg="Started container" PID=1789 containerID=99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b description=kube-system/storage-provisioner/storage-provisioner id=64c1ec71-00b0-4df7-bd5a-2f1f8611805d name=/runtime.v1.RuntimeService/StartContainer sandboxID=190fdd55a8ebba440ab40f8474250de33a900831867c4033e44fed5135587019
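	(This CRI-O excerpt traces the CreateContainer/StartContainer RPCs for the dashboard-metrics-scraper restart and for storage-provisioner; the /etc/passwd and /etc/group warnings appear when an image ships without those files and are harmless here. On the node these entries come from the crio systemd unit, e.g.:

	    minikube -p embed-certs-748183 ssh -- sudo journalctl -u crio --no-pager | tail -n 30
	)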
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	99a40572d11f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   190fdd55a8ebb       storage-provisioner                          kube-system
	f64639b390fd2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   473df30c6502e       dashboard-metrics-scraper-6ffb444bf9-p8zfx   kubernetes-dashboard
	1e7e496e5f29b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   1615cdf662002       kubernetes-dashboard-855c9754f9-t4hjh        kubernetes-dashboard
	8b58b034d001e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   de4960307c563       coredns-66bc5c9577-vpq66                     kube-system
	9f476797359bc       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   dc17844db4ac6       busybox                                      default
	d9bd80a8cd406       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   e49a45b27e948       kube-proxy-pg8nt                             kube-system
	e8c35dcf7d68a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   ca7797c1846b3       kindnet-9zwww                                kube-system
	08ed3a888e107       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   190fdd55a8ebb       storage-provisioner                          kube-system
	92c81ac32663f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   d98bc50d3d88f       kube-apiserver-embed-certs-748183            kube-system
	915e447acc04f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   d2abdcb5e9d65       kube-scheduler-embed-certs-748183            kube-system
	7ce1beed8bfec       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   2f8d17544e010       etcd-embed-certs-748183                      kube-system
	4f580374d7075       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   fe927e35d3e0f       kube-controller-manager-embed-certs-748183   kube-system
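	(This table is the node's CRI view of container state — the columns match crictl's listing. Note the Exited dashboard-metrics-scraper at ATTEMPT 2 and the restarted storage-provisioner at ATTEMPT 1, consistent with the CRI-O and kubelet logs elsewhere in this dump. To reproduce it on the node:

	    minikube -p embed-certs-748183 ssh -- sudo crictl ps -a
	)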
	
	
	==> coredns [8b58b034d001ed44effae858626302ae16cc57c4e26297e53e6cb6b96e66cf48] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47051 - 11717 "HINFO IN 942979073962769649.6887173836007951737. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.016157486s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
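	(The dial failures to 10.96.0.1:443 are CoreDNS trying to reach the apiserver through the kubernetes Service's ClusterIP before the dataplane rules (kube-proxy/kindnet) had been reprogrammed after the restart; once that cleared, the plugin/ready waits above stopped. The same log can be fetched with:

	    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=30
	)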
	
	
	==> describe nodes <==
	Name:               embed-certs-748183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-748183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=embed-certs-748183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_36_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:36:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-748183
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:38:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:37:43 +0000   Sun, 02 Nov 2025 13:36:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:37:43 +0000   Sun, 02 Nov 2025 13:36:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:37:43 +0000   Sun, 02 Nov 2025 13:36:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:37:43 +0000   Sun, 02 Nov 2025 13:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-748183
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b44fc6a8-f48d-4728-a7f6-4178f12db103
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-vpq66                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-748183                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-9zwww                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-748183             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-748183    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-pg8nt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-748183             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-p8zfx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-t4hjh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node embed-certs-748183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node embed-certs-748183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node embed-certs-748183 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node embed-certs-748183 event: Registered Node embed-certs-748183 in Controller
	  Normal  NodeReady                95s                kubelet          Node embed-certs-748183 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 57s)  kubelet          Node embed-certs-748183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)  kubelet          Node embed-certs-748183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 57s)  kubelet          Node embed-certs-748183 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node embed-certs-748183 event: Registered Node embed-certs-748183 in Controller
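	(Two "Starting kubelet." events (112s and 57s ago) plus two RegisteredNode events show this node was brought up twice within the test window, matching the StartStop stop/restart flow; the taint-free Ready state above is what the earlier NodePressure checks read. This block is standard output of:

	    kubectl describe node embed-certs-748183
	)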
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
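	(The repeated "martian source" entries are the kernel flagging packets whose source address is unexpected on eth0 — common, and benign, with the bridged pod networks these tests create. They show up because martian logging is enabled on the host, which can be checked with:

	    sysctl net.ipv4.conf.all.log_martians
	)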
	
	
	==> etcd [7ce1beed8bfeca2e3dbe79de858297d5596eb32ea1a78ba33516e86fff957e00] <==
	{"level":"warn","ts":"2025-11-02T13:37:11.373578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.379660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.388211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.394520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.401334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.407617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.413643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.420160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.428198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.434798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.445702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.451850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.459824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.467229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.477963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.484145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.490901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.497064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.503824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.510206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.517113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.536312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.542550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.548910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.602617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44382","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:38:06 up  1:20,  0 user,  load average: 2.63, 3.72, 2.63
	Linux embed-certs-748183 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e8c35dcf7d68a661284766a04b37fc308886ce23eb03d3449f58204b53949056] <==
	I1102 13:37:13.556755       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:37:13.556994       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1102 13:37:13.557148       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:37:13.557163       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:37:13.557187       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:37:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:37:13.662743       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:37:13.662786       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:37:13.662808       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:37:13.663156       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:37:13.965036       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:37:13.965061       1 metrics.go:72] Registering metrics
	I1102 13:37:13.965127       1 controller.go:711] "Syncing nftables rules"
	I1102 13:37:23.663295       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:37:23.663342       1 main.go:301] handling current node
	I1102 13:37:33.667061       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:37:33.667092       1 main.go:301] handling current node
	I1102 13:37:43.663386       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:37:43.663417       1 main.go:301] handling current node
	I1102 13:37:53.666419       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:37:53.666450       1 main.go:301] handling current node
	I1102 13:38:03.668137       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:38:03.668172       1 main.go:301] handling current node
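	(kindnet reconciles node routes on a roughly 10-second loop — the repeating "Handling node"/"handling current node" pairs. The earlier NRI connect failure is non-fatal: the plugin simply runs without NRI when the runtime has not enabled the /var/run/nri/nri.sock socket. Its logs can be pulled via the daemonset label, assuming the usual app=kindnet label minikube applies:

	    kubectl -n kube-system logs -l app=kindnet --tail=20
	)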
	
	
	==> kube-apiserver [92c81ac32663feb2e55e81de4aea9ec83b4adedd0494edb88c83e13189d4ab75] <==
	I1102 13:37:12.091040       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 13:37:12.091051       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 13:37:12.091097       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 13:37:12.091112       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 13:37:12.091734       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 13:37:12.091076       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 13:37:12.092209       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:37:12.096959       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1102 13:37:12.098430       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 13:37:12.107667       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1102 13:37:12.116035       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1102 13:37:12.116064       1 policy_source.go:240] refreshing policies
	I1102 13:37:12.116192       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:37:12.365658       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 13:37:12.394864       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:37:12.412168       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:37:12.418846       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:37:12.425086       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:37:12.455805       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.36.46"}
	I1102 13:37:12.475649       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.201.208"}
	I1102 13:37:12.992830       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:37:14.971855       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 13:37:14.971900       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 13:37:15.371875       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:37:15.571558       1 controller.go:667] quota admission added evaluator for: endpoints
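	(The "quota admission added evaluator" lines show the apiserver lazily registering quota evaluation for each resource type as it is first touched after restart, and the "allocated clusterIPs" lines show the two dashboard Services receiving their VIPs. Those Services can be confirmed with:

	    kubectl -n kubernetes-dashboard get svc
	)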
	
	
	==> kube-controller-manager [4f580374d707565df73a17f079d127e0b80c61ce6670bb6a10a142440e8d5a5a] <==
	I1102 13:37:14.945702       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 13:37:14.947863       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 13:37:14.950155       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 13:37:14.951316       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:37:14.967711       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 13:37:14.967736       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 13:37:14.967793       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 13:37:14.967849       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 13:37:14.967851       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 13:37:14.967853       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 13:37:14.968180       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 13:37:14.967852       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 13:37:14.969520       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 13:37:14.969648       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 13:37:14.973858       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:37:14.973872       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:37:14.973877       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:37:14.976027       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:37:14.978461       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 13:37:14.980609       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 13:37:14.983136       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 13:37:14.986129       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 13:37:14.987625       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 13:37:14.990010       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 13:37:14.995692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d9bd80a8cd406f1eb033d3ba6453e88c337437c7215205f73e35e0729b0a960e] <==
	I1102 13:37:13.323661       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:37:13.385282       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:37:13.485504       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:37:13.485545       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1102 13:37:13.485704       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:37:13.503650       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:37:13.503696       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:37:13.508947       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:37:13.509416       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:37:13.509436       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:13.510867       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:37:13.510893       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:37:13.510916       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:37:13.510922       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:37:13.510926       1 config.go:200] "Starting service config controller"
	I1102 13:37:13.510945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:37:13.510934       1 config.go:309] "Starting node config controller"
	I1102 13:37:13.510970       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:37:13.510977       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:37:13.611858       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:37:13.611900       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:37:13.611914       1 shared_informer.go:356] "Caches are synced" controller="service config"
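	(The single error in this block is advisory: with nodePortAddresses unset, kube-proxy accepts NodePort traffic on every local IP, which is the minikube default. The active configuration lives in the standard kubeadm ConfigMap:

	    kubectl -n kube-system get configmap kube-proxy -o yaml
	)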
	
	
	==> kube-scheduler [915e447acc04f2663378328784e388e7b53096e05c75aacb4faa06eac072d743] <==
	I1102 13:37:11.237163       1 serving.go:386] Generated self-signed cert in-memory
	I1102 13:37:12.055080       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:37:12.055122       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:12.062741       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1102 13:37:12.062785       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1102 13:37:12.062809       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:12.062831       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:37:12.062834       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:12.062840       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:37:12.063368       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:37:12.063483       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 13:37:12.163543       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:37:12.163559       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1102 13:37:12.163554       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:37:15 embed-certs-748183 kubelet[746]: I1102 13:37:15.541914     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78st7\" (UniqueName: \"kubernetes.io/projected/5163c067-aafb-41eb-bfce-05f4754d5cbc-kube-api-access-78st7\") pod \"kubernetes-dashboard-855c9754f9-t4hjh\" (UID: \"5163c067-aafb-41eb-bfce-05f4754d5cbc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-t4hjh"
	Nov 02 13:37:15 embed-certs-748183 kubelet[746]: I1102 13:37:15.541959     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-p8zfx\" (UID: \"8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx"
	Nov 02 13:37:17 embed-certs-748183 kubelet[746]: I1102 13:37:17.959759     746 scope.go:117] "RemoveContainer" containerID="e7358d22aa329f0e59f233488a2023262be71f5a37cf71c64dfe24eaf731c9c6"
	Nov 02 13:37:18 embed-certs-748183 kubelet[746]: I1102 13:37:18.965107     746 scope.go:117] "RemoveContainer" containerID="e7358d22aa329f0e59f233488a2023262be71f5a37cf71c64dfe24eaf731c9c6"
	Nov 02 13:37:18 embed-certs-748183 kubelet[746]: I1102 13:37:18.965242     746 scope.go:117] "RemoveContainer" containerID="c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623"
	Nov 02 13:37:18 embed-certs-748183 kubelet[746]: E1102 13:37:18.965446     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:37:19 embed-certs-748183 kubelet[746]: I1102 13:37:19.168761     746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 02 13:37:19 embed-certs-748183 kubelet[746]: I1102 13:37:19.968937     746 scope.go:117] "RemoveContainer" containerID="c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623"
	Nov 02 13:37:19 embed-certs-748183 kubelet[746]: E1102 13:37:19.969131     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:37:21 embed-certs-748183 kubelet[746]: I1102 13:37:21.985632     746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-t4hjh" podStartSLOduration=1.598274577 podStartE2EDuration="6.985611896s" podCreationTimestamp="2025-11-02 13:37:15 +0000 UTC" firstStartedPulling="2025-11-02 13:37:15.786075234 +0000 UTC m=+5.952860109" lastFinishedPulling="2025-11-02 13:37:21.173412552 +0000 UTC m=+11.340197428" observedRunningTime="2025-11-02 13:37:21.985491841 +0000 UTC m=+12.152276715" watchObservedRunningTime="2025-11-02 13:37:21.985611896 +0000 UTC m=+12.152396757"
	Nov 02 13:37:24 embed-certs-748183 kubelet[746]: I1102 13:37:24.964699     746 scope.go:117] "RemoveContainer" containerID="c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623"
	Nov 02 13:37:24 embed-certs-748183 kubelet[746]: E1102 13:37:24.964879     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:37:36 embed-certs-748183 kubelet[746]: I1102 13:37:36.923251     746 scope.go:117] "RemoveContainer" containerID="c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623"
	Nov 02 13:37:38 embed-certs-748183 kubelet[746]: I1102 13:37:38.027476     746 scope.go:117] "RemoveContainer" containerID="c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623"
	Nov 02 13:37:38 embed-certs-748183 kubelet[746]: I1102 13:37:38.027728     746 scope.go:117] "RemoveContainer" containerID="f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71"
	Nov 02 13:37:38 embed-certs-748183 kubelet[746]: E1102 13:37:38.027944     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:37:44 embed-certs-748183 kubelet[746]: I1102 13:37:44.046890     746 scope.go:117] "RemoveContainer" containerID="08ed3a888e10792c720b91d4af71d51d5756b14ec6e8b23bd5574eacf0dd9cfe"
	Nov 02 13:37:44 embed-certs-748183 kubelet[746]: I1102 13:37:44.965670     746 scope.go:117] "RemoveContainer" containerID="f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71"
	Nov 02 13:37:44 embed-certs-748183 kubelet[746]: E1102 13:37:44.965849     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:37:56 embed-certs-748183 kubelet[746]: I1102 13:37:56.923476     746 scope.go:117] "RemoveContainer" containerID="f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71"
	Nov 02 13:37:56 embed-certs-748183 kubelet[746]: E1102 13:37:56.924002     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:38:03 embed-certs-748183 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:38:03 embed-certs-748183 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:38:03 embed-certs-748183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 02 13:38:03 embed-certs-748183 systemd[1]: kubelet.service: Consumed 1.691s CPU time.
	
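	# Note: the kubelet entries show dashboard-metrics-scraper-6ffb444bf9-p8zfx
	# crash-looping, with the back-off growing from 10s to 20s. A debugging
	# sketch (not part of the test run) to pull the crashed container's previous
	# output and its events, using the context and pod name from the log:
	kubectl --context embed-certs-748183 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-p8zfx --previous
	kubectl --context embed-certs-748183 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-p8zfx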
	
	==> kubernetes-dashboard [1e7e496e5f29b31984c3f1c59eaba41bdd280208ffae335779ddf825b58a686e] <==
	2025/11/02 13:37:21 Starting overwatch
	2025/11/02 13:37:21 Using namespace: kubernetes-dashboard
	2025/11/02 13:37:21 Using in-cluster config to connect to apiserver
	2025/11/02 13:37:21 Using secret token for csrf signing
	2025/11/02 13:37:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 13:37:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 13:37:21 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 13:37:21 Generating JWE encryption key
	2025/11/02 13:37:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 13:37:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 13:37:21 Initializing JWE encryption key from synchronized object
	2025/11/02 13:37:21 Creating in-cluster Sidecar client
	2025/11/02 13:37:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 13:37:21 Serving insecurely on HTTP port: 9090
	2025/11/02 13:37:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
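	# Note: the dashboard's metric client retries every 30s because it cannot
	# reach the dashboard-metrics-scraper service, consistent with the scraper
	# pod crash-looping above. A quick check of the service and its endpoints
	# (a sketch using names from the log):
	kubectl --context embed-certs-748183 -n kubernetes-dashboard get service dashboard-metrics-scraper
	kubectl --context embed-certs-748183 -n kubernetes-dashboard get endpointslices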
	
	==> storage-provisioner [08ed3a888e10792c720b91d4af71d51d5756b14ec6e8b23bd5574eacf0dd9cfe] <==
	I1102 13:37:13.287494       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 13:37:43.289667       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b] <==
	I1102 13:37:44.094850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:37:44.101906       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:37:44.101953       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 13:37:44.104181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:47.559585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:51.819535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:55.418296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:58.472348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:01.494230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:01.498561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:38:01.498723       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:38:01.498849       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-748183_8420181f-fc0b-4799-a9f2-de18cbc5f876!
	I1102 13:38:01.498854       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f5836755-5bb9-4f0c-9c57-d7cfd1b93802", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-748183_8420181f-fc0b-4799-a9f2-de18cbc5f876 became leader
	W1102 13:38:01.500723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:01.504324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:38:01.599120       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-748183_8420181f-fc0b-4799-a9f2-de18cbc5f876!
	W1102 13:38:03.507289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:03.510932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:05.514083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:05.518508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
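	# Note: the provisioner still runs leader election through a v1 Endpoints
	# object (kube-system/k8s.io-minikube-hostpath), which is what triggers the
	# repeated deprecation warnings; discovery.k8s.io/v1 EndpointSlice is the
	# replacement named in the warning. To inspect both by the names in the log:
	kubectl --context embed-certs-748183 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context embed-certs-748183 -n kube-system get endpointslices.discovery.k8s.io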

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-748183 -n embed-certs-748183
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-748183 -n embed-certs-748183: exit status 2 (325.305363ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
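The status checks above select a single field with minikube's Go-template --format flag; exit status 2 signals a degraded profile even though the selected field prints "Running". A sketch of reading the full status, or several fields in one call (the field names match the templates used elsewhere in this report; the combined template is an assumption, not taken from the run):

	out/minikube-linux-amd64 status -p embed-certs-748183
	out/minikube-linux-amd64 status --format='host={{.Host}} apiserver={{.APIServer}}' -p embed-certs-748183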
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-748183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-748183
helpers_test.go:243: (dbg) docker inspect embed-certs-748183:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6",
	        "Created": "2025-11-02T13:35:52.708051752Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:37:03.110993535Z",
	            "FinishedAt": "2025-11-02T13:37:01.699734434Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/hostname",
	        "HostsPath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/hosts",
	        "LogPath": "/var/lib/docker/containers/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6/a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6-json.log",
	        "Name": "/embed-certs-748183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-748183:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-748183",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a897616b792541134df618e5684102cfcb4980edef88a3b8af1c709b0252dab6",
	                "LowerDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/26a34f14e4f106afcb51afefb6434f95fd70e049cfae28604e567abe0d4716e3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-748183",
	                "Source": "/var/lib/docker/volumes/embed-certs-748183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-748183",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-748183",
	                "name.minikube.sigs.k8s.io": "embed-certs-748183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f96ec663de9609b2e699ecf36991d2f130acd465d56bf7acdb7082122201a9a",
	            "SandboxKey": "/var/run/docker/netns/8f96ec663de9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-748183": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:eb:5f:c7:76:3d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4e27916e6204d80d9f3ecde4dc1f7e05cab435dec08a0139421fe16b2b896e8b",
	                    "EndpointID": "3fb1ce7183eac01450479cda1644a14ea98577807d0980fa6ebd7d2b5fd617ae",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-748183",
	                        "a897616b7925"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
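A single value can be pulled out of this inspect document with a Go template instead of dumping everything; the pattern below is the same one minikube's machine driver uses later in this log to resolve the mapped SSH port (22/tcp maps to host port 33125 for this container):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-748183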
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-748183 -n embed-certs-748183
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-748183 -n embed-certs-748183: exit status 2 (320.856682ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-748183 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-748183 logs -n 25: (1.119262073s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ stop    │ -p embed-certs-748183 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p old-k8s-version-054159                                                                                                                                                                                                                     │ old-k8s-version-054159       │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:36 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538419 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p embed-certs-748183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ stop    │ -p newest-cni-066482 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-066482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ newest-cni-066482 image list --format=json                                                                                                                                                                                                    │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p newest-cni-066482 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ no-preload-978795 image list --format=json                                                                                                                                                                                                    │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p no-preload-978795 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ delete  │ -p no-preload-978795                                                                                                                                                                                                                          │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p no-preload-978795                                                                                                                                                                                                                          │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ embed-certs-748183 image list --format=json                                                                                                                                                                                                   │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │ 02 Nov 25 13:38 UTC │
	│ pause   │ -p embed-certs-748183 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:37:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:37:20.524373  333962 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:37:20.524647  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524664  333962 out.go:374] Setting ErrFile to fd 2...
	I1102 13:37:20.524670  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524846  333962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:37:20.525403  333962 out.go:368] Setting JSON to false
	I1102 13:37:20.526966  333962 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4793,"bootTime":1762085848,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:37:20.527085  333962 start.go:143] virtualization: kvm guest
	I1102 13:37:20.531180  333962 out.go:179] * [newest-cni-066482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:37:20.533535  333962 notify.go:221] Checking for updates...
	I1102 13:37:20.533705  333962 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:37:20.535165  333962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:37:20.536733  333962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:20.538369  333962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:37:20.539773  333962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:37:20.541014  333962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:37:20.543949  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:20.544901  333962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:37:20.580929  333962 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:37:20.581269  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.677940  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.664880977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.678092  333962 docker.go:319] overlay module found
	I1102 13:37:20.686090  333962 out.go:179] * Using the docker driver based on existing profile
	I1102 13:37:20.689767  333962 start.go:309] selected driver: docker
	I1102 13:37:20.689788  333962 start.go:930] validating driver "docker" against &{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.689907  333962 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:37:20.690830  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.765132  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.75342287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.765679  333962 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:20.765731  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:20.765799  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:20.765881  333962 start.go:353] cluster config:
	{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.825212  333962 out.go:179] * Starting "newest-cni-066482" primary control-plane node in "newest-cni-066482" cluster
	I1102 13:37:20.829240  333962 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:37:20.869092  333962 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:37:20.895924  333962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:37:20.895925  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:20.896230  333962 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:37:20.896249  333962 cache.go:59] Caching tarball of preloaded images
	I1102 13:37:20.896370  333962 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:37:20.896389  333962 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:37:20.896531  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:20.923310  333962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:37:20.923336  333962 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:37:20.923354  333962 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:37:20.923397  333962 start.go:360] acquireMachinesLock for newest-cni-066482: {Name:mk25ceca9700045fc79c727ac5793f50b1f35f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:37:20.923467  333962 start.go:364] duration metric: took 45.165µs to acquireMachinesLock for "newest-cni-066482"
	I1102 13:37:20.923495  333962 start.go:96] Skipping create...Using existing machine configuration
	I1102 13:37:20.923507  333962 fix.go:54] fixHost starting: 
	I1102 13:37:20.923821  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:20.947956  333962 fix.go:112] recreateIfNeeded on newest-cni-066482: state=Stopped err=<nil>
	W1102 13:37:20.947991  333962 fix.go:138] unexpected machine state, will restart: <nil>
	W1102 13:37:17.749910  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:19.754111  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:18.133437  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:20.135974  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:22.633523  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:19.800458  333276 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-538419" ...
	I1102 13:37:19.800582  333276 cli_runner.go:164] Run: docker start default-k8s-diff-port-538419
	I1102 13:37:20.258040  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:20.285518  333276 kic.go:430] container "default-k8s-diff-port-538419" state is running.
	I1102 13:37:20.285975  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:20.314790  333276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:37:20.315668  333276 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:20.316243  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:20.344162  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:20.344635  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:20.344656  333276 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:20.345938  333276 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42554->127.0.0.1:33130: read: connection reset by peer
	I1102 13:37:23.485888  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
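	# Note: the first dial above fails with "connection reset by peer" because
	# sshd inside the just-restarted container is not accepting connections yet;
	# libmachine retries and the hostname command then succeeds. The equivalent
	# manual probe, with the port, user, and key path that appear in this log:
	ssh -o StrictHostKeyChecking=no -p 33130 -i /home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa docker@127.0.0.1 hostname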
	I1102 13:37:23.485911  333276 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-538419"
	I1102 13:37:23.485968  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.504539  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.504787  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.504808  333276 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-538419 && echo "default-k8s-diff-port-538419" | sudo tee /etc/hostname
	I1102 13:37:23.654299  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.654392  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.673075  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.673329  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.673355  333276 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-538419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-538419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-538419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:23.814290  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:23.814321  333276 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:23.814341  333276 ubuntu.go:190] setting up certificates
	I1102 13:37:23.814351  333276 provision.go:84] configureAuth start
	I1102 13:37:23.814396  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:23.831955  333276 provision.go:143] copyHostCerts
	I1102 13:37:23.832026  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:23.832046  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:23.832132  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:23.832261  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:23.832273  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:23.832318  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:23.832420  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:23.832433  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:23.832471  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:23.832546  333276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-538419 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-538419 localhost minikube]
	I1102 13:37:24.219472  333276 provision.go:177] copyRemoteCerts
	I1102 13:37:24.219536  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.219587  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.237848  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.340891  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 13:37:24.358910  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:24.376167  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:24.393830  333276 provision.go:87] duration metric: took 579.46643ms to configureAuth
	I1102 13:37:24.393865  333276 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:24.394064  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:24.394157  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.412877  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.413122  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:24.413143  333276 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:20.978818  333962 out.go:252] * Restarting existing docker container for "newest-cni-066482" ...
	I1102 13:37:20.978914  333962 cli_runner.go:164] Run: docker start newest-cni-066482
	I1102 13:37:21.270167  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:21.288682  333962 kic.go:430] container "newest-cni-066482" state is running.
	I1102 13:37:21.289009  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:21.309331  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:21.309611  333962 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:21.309709  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:21.330053  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:21.330413  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:21.330432  333962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:21.331174  333962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55362->127.0.0.1:33135: read: connection reset by peer
	I1102 13:37:24.473386  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.473415  333962 ubuntu.go:182] provisioning hostname "newest-cni-066482"
	I1102 13:37:24.473479  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.491931  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.492137  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.492150  333962 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-066482 && echo "newest-cni-066482" | sudo tee /etc/hostname
	I1102 13:37:24.643677  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.643803  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.663238  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.663468  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.663495  333962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-066482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-066482/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-066482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:24.810077  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:24.810117  333962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:24.810141  333962 ubuntu.go:190] setting up certificates
	I1102 13:37:24.810156  333962 provision.go:84] configureAuth start
	I1102 13:37:24.810212  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:24.827792  333962 provision.go:143] copyHostCerts
	I1102 13:37:24.827858  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:24.827875  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:24.827953  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:24.828150  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:24.828164  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:24.828215  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:24.828305  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:24.828317  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:24.828355  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:24.828426  333962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.newest-cni-066482 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-066482]
	I1102 13:37:24.927237  333962 provision.go:177] copyRemoteCerts
	I1102 13:37:24.927289  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.927321  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.944584  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.045425  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:25.062863  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:25.080629  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:37:25.097296  333962 provision.go:87] duration metric: took 287.125327ms to configureAuth
	I1102 13:37:25.097332  333962 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:25.097535  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:25.097668  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.115731  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:25.115937  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:25.115955  333962 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:25.401017  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:25.401045  333962 machine.go:97] duration metric: took 4.091415666s to provisionDockerMachine
	I1102 13:37:25.401058  333962 start.go:293] postStartSetup for "newest-cni-066482" (driver="docker")
	I1102 13:37:25.401071  333962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:25.401154  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:25.401203  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.420252  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.519659  333962 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:25.522994  333962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:25.523015  333962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:25.523025  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:25.523068  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:25.523146  333962 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:25.523246  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.712619  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:24.712652  333276 machine.go:97] duration metric: took 4.396840284s to provisionDockerMachine
	I1102 13:37:24.712667  333276 start.go:293] postStartSetup for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:37:24.712682  333276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:24.712766  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:24.712819  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.733777  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.836037  333276 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:24.839702  333276 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:24.839733  333276 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:24.839744  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:24.839789  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:24.839894  333276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:24.840014  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.847534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:24.864718  333276 start.go:296] duration metric: took 152.035287ms for postStartSetup
	I1102 13:37:24.864791  333276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:24.864826  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.884885  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.983028  333276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:24.987641  333276 fix.go:56] duration metric: took 5.212515962s for fixHost
	I1102 13:37:24.987669  333276 start.go:83] releasing machines lock for "default-k8s-diff-port-538419", held for 5.212566618s
	I1102 13:37:24.987736  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:25.007034  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.007083  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.007090  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.007125  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.007153  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.007176  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.007213  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.007274  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.007319  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:25.024428  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:25.135885  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.153535  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.171518  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.177840  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.186217  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190875  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190931  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.225348  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.233857  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.242147  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245844  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245889  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.282977  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:25.290988  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.299515  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303360  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303415  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.338843  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.348256  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:25.352326  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
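	The b5213941.0, 51391683.0 and 3ec20f2e.0 symlinks created above follow OpenSSL's hashed-directory convention: the link name is the output of openssl x509 -hash for the certificate plus a .0 suffix, which is how TLS stacks locate a CA by subject in /etc/ssl/certs. A sketch that derives the link name the same way (assumes openssl is on PATH; hashLinkName is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLinkName returns the /etc/ssl/certs symlink name for a CA file,
// i.e. the OpenSSL subject hash followed by ".0".
func hashLinkName(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	name, err := hashLinkName("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Println("/etc/ssl/certs/" + name)
}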
	I1102 13:37:25.357122  333276 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:25.357227  333276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:25.361283  333276 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:25.422770  333276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:25.458920  333276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:25.463750  333276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:25.463815  333276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:25.471852  333276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:25.471874  333276 start.go:496] detecting cgroup driver to use...
	I1102 13:37:25.471904  333276 detect.go:190] detected "systemd" cgroup driver on host os
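	How detect.go settles on "systemd" here is not shown in the log. One plausible heuristic, stated purely as an assumption rather than minikube's verified logic, is to treat a unified cgroup v2 hierarchy on the host as implying the systemd cgroup driver:

package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver is an assumed heuristic: on cgroup v2 hosts the
// file /sys/fs/cgroup/cgroup.controllers exists, and such hosts are
// normally managed by systemd.
func detectCgroupDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() { fmt.Println(detectCgroupDriver()) }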
	I1102 13:37:25.471948  333276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:25.485878  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:25.497990  333276 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:25.498045  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:25.512402  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:25.525187  333276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:25.608539  333276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:25.688830  333276 docker.go:234] disabling docker service ...
	I1102 13:37:25.688921  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:25.705783  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:25.723506  333276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:25.813168  333276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:25.898289  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:25.910519  333276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:25.924524  333276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:25.924604  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.933372  333276 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:25.933426  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.942218  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.951107  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.959830  333276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:25.967946  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.977032  333276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.986463  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.995429  333276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.003006  333276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.010445  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.094219  333276 ssh_runner.go:195] Run: sudo systemctl restart crio
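	The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, set conmon_cgroup to "pod", and open unprivileged ports via default_sysctls, then daemon-reload and restart cri-o. The two headline substitutions, sketched in Go for clarity (illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf performs the same line-oriented substitutions the
// logged sed commands do: replace any pause_image and cgroup_manager
// assignments with the desired values.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return out
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
	fmt.Print(rewriteCrioConf(conf, "registry.k8s.io/pause:3.10.1", "systemd"))
}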
	I1102 13:37:26.215173  333276 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.215239  333276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.219123  333276 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.219176  333276 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.222728  333276 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.250907  333276 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.250993  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.285974  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.314527  333276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:25.531179  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.548059  333962 start.go:296] duration metric: took 146.985428ms for postStartSetup
	I1102 13:37:25.548168  333962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:25.548227  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.572631  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.670554  333962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:25.674984  333962 fix.go:56] duration metric: took 4.751471621s for fixHost
	I1102 13:37:25.675009  333962 start.go:83] releasing machines lock for "newest-cni-066482", held for 4.751529653s
	I1102 13:37:25.675073  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:25.693462  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.693510  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.693517  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.693544  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.693612  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.693646  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.693704  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.693780  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.693820  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.715629  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.832398  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.854465  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.871731  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.877714  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.886048  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889747  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889800  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.924157  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.932269  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.940725  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944474  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944520  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.982544  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.991404  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.999821  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003838  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003886  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.045614  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:26.054860  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:26.058745  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:37:26.062392  333962 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:26.062503  333962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:26.066112  333962 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:26.127272  333962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:26.165639  333962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:26.170693  333962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:26.170747  333962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:26.179292  333962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:26.179317  333962 start.go:496] detecting cgroup driver to use...
	I1102 13:37:26.179346  333962 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:26.179401  333962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:26.194965  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:26.209348  333962 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:26.209406  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:26.224797  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:26.237179  333962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:26.329871  333962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:26.424322  333962 docker.go:234] disabling docker service ...
	I1102 13:37:26.424387  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:26.439911  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:26.453248  333962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:26.542141  333962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:26.630964  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:26.643532  333962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:26.658482  333962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:26.658590  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.668170  333962 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:26.668240  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.678403  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.687532  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.697557  333962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:26.707346  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.718538  333962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.729625  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.743583  333962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.753321  333962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.761369  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.839464  333962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.938004  333962 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.938073  333962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.942145  333962 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.942204  333962 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.946060  333962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.972282  333962 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.972365  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.002057  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.032337  333962 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:27.033686  333962 cli_runner.go:164] Run: docker network inspect newest-cni-066482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:27.051527  333962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:27.055606  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:27.067494  333962 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1102 13:37:22.249113  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:24.748949  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:26.749600  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:26.315635  333276 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:26.333971  333276 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:26.337905  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.348667  333276 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:26.348772  333276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:26.348822  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.387710  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.387730  333276 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:26.387777  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.413505  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.413528  333276 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:26.413538  333276 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1102 13:37:26.413643  333276 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 13:37:26.413707  333276 ssh_runner.go:195] Run: crio config
	I1102 13:37:26.464812  333276 cni.go:84] Creating CNI manager for ""
	I1102 13:37:26.464835  333276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:26.464845  333276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:37:26.464866  333276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538419 NodeName:default-k8s-diff-port-538419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:26.464984  333276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
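	Note that the KubeletConfiguration above deliberately disables disk-pressure eviction (all evictionHard thresholds at "0%", imageGCHighThresholdPercent: 100) and tolerates swap via failSwapOn: false, which suits a throwaway CI node. A quick way to sanity-check such a document is to round-trip it through a typed struct; a sketch using gopkg.in/yaml.v3 (the struct shape is illustrative, not minikube's type):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig captures the fields minikube sets in the generated
// KubeletConfiguration document above.
type kubeletConfig struct {
	CgroupDriver                string            `yaml:"cgroupDriver"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
}

func main() {
	doc := []byte(`
cgroupDriver: systemd
failSwapOn: false
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
`)
	var kc kubeletConfig
	if err := yaml.Unmarshal(doc, &kc); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", kc)
}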
	I1102 13:37:26.465035  333276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:26.474038  333276 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:26.474098  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:26.483977  333276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 13:37:26.499882  333276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:26.512917  333276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1102 13:37:26.525720  333276 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:26.529537  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.539879  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.630475  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:26.654165  333276 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419 for IP: 192.168.85.2
	I1102 13:37:26.654186  333276 certs.go:195] generating shared ca certs ...
	I1102 13:37:26.654206  333276 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:26.654367  333276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:26.654420  333276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:26.654431  333276 certs.go:257] generating profile certs ...
	I1102 13:37:26.654503  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key
	I1102 13:37:26.654557  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d
	I1102 13:37:26.654639  333276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key
	I1102 13:37:26.654737  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:26.654764  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:26.654773  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:26.654795  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:26.654816  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:26.654836  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:26.654873  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:26.655534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:26.675380  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:26.694442  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:26.715145  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:26.740328  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 13:37:26.762384  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:37:26.779554  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:26.801750  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:37:26.818827  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:26.836709  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:26.855014  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:26.874155  333276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:26.887334  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:26.893721  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:26.902112  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905794  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905842  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.942658  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:26.950976  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:26.959359  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963079  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963124  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.004948  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.013797  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.023152  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027166  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027232  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.065532  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:27.074165  333276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.078238  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.117094  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:27.159482  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:27.208066  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:27.263395  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:27.326908  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
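
The six `openssl x509 -checkend 86400` probes above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours); a non-zero exit would make minikube regenerate the certificate instead of reusing it. The same check expressed in Go, as a hedged sketch (the path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside
// the given window, matching what `openssl x509 -checkend` verifies.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		os.Exit(1) // mirrors openssl's non-zero exit when the cert expires within the window
	}
}
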
	I1102 13:37:27.369723  333276 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:27.369813  333276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:27.369901  333276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:27.406986  333276 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:37:27.407007  333276 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:37:27.407013  333276 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:37:27.407018  333276 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:37:27.407022  333276 cri.go:89] found id: ""
	I1102 13:37:27.407085  333276 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:27.422941  333276 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:27Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:27.423012  333276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:27.432001  333276 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:27.432029  333276 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:27.432125  333276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:27.441699  333276 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:27.442817  333276 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-538419" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.443582  333276 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-538419" cluster setting kubeconfig missing "default-k8s-diff-port-538419" context setting]
	I1102 13:37:27.444782  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.446868  333276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:27.456310  333276 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1102 13:37:27.456342  333276 kubeadm.go:602] duration metric: took 24.307485ms to restartPrimaryControlPlane
	I1102 13:37:27.456351  333276 kubeadm.go:403] duration metric: took 86.638872ms to StartCluster
	I1102 13:37:27.456373  333276 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.456425  333276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.458467  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.458734  333276 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:27.458787  333276 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:27.458879  333276 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458899  333276 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.458911  333276 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:27.458908  333276 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458932  333276 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-538419"
	I1102 13:37:27.458925  333276 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458942  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	W1102 13:37:27.458947  333276 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:27.458958  333276 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-538419"
	I1102 13:37:27.458977  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.459272  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459713  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:27.463479  333276 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:27.466531  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.489401  333276 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:27.489460  333276 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.490695  333276 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.490742  333276 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:27.490779  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.490905  333276 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.490993  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:27.491127  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.491342  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.492226  333276 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1102 13:37:24.634329  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:27.133336  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:27.068545  333962 kubeadm.go:884] updating cluster {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:27.068680  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:27.068745  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.101393  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.101420  333962 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:27.101479  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.128092  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.128116  333962 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:27.128126  333962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 13:37:27.128251  333962 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-066482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
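
The doubled ExecStart in the kubelet unit above is the standard systemd drop-in idiom: a bare `ExecStart=` first clears the command inherited from the base unit, and the second line installs the override. A sketch of writing such a drop-in from Go (flags abbreviated from the log; the path matches the scp destination a few lines below):

package main

import "os"

// dropIn resets the inherited ExecStart before defining the new one,
// as required by systemd for non-oneshot services.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
`

func main() {
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		panic(err)
	}
}
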
	I1102 13:37:27.128346  333962 ssh_runner.go:195] Run: crio config
	I1102 13:37:27.177989  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:27.178010  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:27.178023  333962 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1102 13:37:27.178058  333962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-066482 NodeName:newest-cni-066482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:27.178237  333962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-066482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
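
Two CI-oriented knobs stand out in the generated config: evictionHard thresholds of "0%" together with imageGCHighThresholdPercent: 100 effectively turn off disk-pressure eviction and image garbage collection, and the zeroed conntrack timeouts tell kube-proxy to leave the net.netfilter sysctls alone (per the inline comments). minikube renders this file from a Go template; a toy text/template sketch of the same idea (the template and field names here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// A stripped-down stand-in for the InitConfiguration portion above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	data := struct {
		NodeIP   string
		Port     int
		NodeName string
	}{NodeIP: "192.168.76.2", Port: 8443, NodeName: "newest-cni-066482"}
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
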
	
	I1102 13:37:27.178304  333962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:27.189125  333962 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:27.189195  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:27.198724  333962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 13:37:27.212769  333962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:27.228632  333962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1102 13:37:27.246146  333962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:27.251613  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
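
The one-liner above pins control-plane.minikube.internal in /etc/hosts: grep -v strips any stale entry (matching the literal tab before the hostname), echo appends the fresh IP, and the result is staged in /tmp/h.$$ and copied back with sudo, since a plain shell redirection would not run with elevated rights. The equivalent rewrite in Go, as a sketch:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any stale "<ip>\t<name>" entry and appends the current
// mapping, mirroring the grep -v / echo pipeline in the log above.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
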
	I1102 13:37:27.264788  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.377806  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.402967  333962 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482 for IP: 192.168.76.2
	I1102 13:37:27.402990  333962 certs.go:195] generating shared ca certs ...
	I1102 13:37:27.403009  333962 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.403159  333962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:27.403219  333962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:27.403231  333962 certs.go:257] generating profile certs ...
	I1102 13:37:27.403335  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.key
	I1102 13:37:27.403407  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b
	I1102 13:37:27.403461  333962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key
	I1102 13:37:27.403744  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:27.403786  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:27.403799  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:27.403828  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:27.403859  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:27.403889  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:27.403938  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:27.404687  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:27.430704  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:27.452417  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:27.483637  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:27.517977  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 13:37:27.573265  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:37:27.598304  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:27.618317  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:37:27.639808  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:27.657181  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:27.681070  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:27.704152  333962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:27.722253  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:27.731519  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:27.743037  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748191  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748248  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.799685  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:27.809081  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:27.818029  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822628  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822681  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.881477  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.891397  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.900808  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904551  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904621  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.942963  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:27.952008  333962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.956221  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.997863  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:28.047948  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:28.098660  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:28.159695  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:28.224833  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1102 13:37:28.294684  333962 kubeadm.go:401] StartCluster: {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:28.294796  333962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:28.294862  333962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:28.338693  333962 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:28.338718  333962 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:28.338726  333962 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:28.338732  333962 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:28.338766  333962 cri.go:89] found id: ""
	I1102 13:37:28.338853  333962 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:28.354945  333962 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:28Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:28.355009  333962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:28.369068  333962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:28.369089  333962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:28.369134  333962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:28.379230  333962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:28.380715  333962 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-066482" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.381840  333962 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-066482" cluster setting kubeconfig missing "newest-cni-066482" context setting]
	I1102 13:37:28.383187  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.385699  333962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:28.395624  333962 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1102 13:37:28.395794  333962 kubeadm.go:602] duration metric: took 26.694184ms to restartPrimaryControlPlane
	I1102 13:37:28.395818  333962 kubeadm.go:403] duration metric: took 101.142697ms to StartCluster
	I1102 13:37:28.395872  333962 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.396257  333962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.398943  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.399509  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:28.399593  333962 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:28.399697  333962 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-066482"
	I1102 13:37:28.399715  333962 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-066482"
	W1102 13:37:28.399723  333962 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:28.399747  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400242  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400322  333962 addons.go:70] Setting dashboard=true in profile "newest-cni-066482"
	I1102 13:37:28.400358  333962 addons.go:239] Setting addon dashboard=true in "newest-cni-066482"
	W1102 13:37:28.400367  333962 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:28.400398  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400424  333962 addons.go:70] Setting default-storageclass=true in profile "newest-cni-066482"
	I1102 13:37:28.400440  333962 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-066482"
	I1102 13:37:28.400747  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400930  333962 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:28.401517  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.404755  333962 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:28.405862  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:28.441415  333962 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 13:37:28.441452  333962 addons.go:239] Setting addon default-storageclass=true in "newest-cni-066482"
	W1102 13:37:28.441469  333962 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:28.441497  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.441992  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.443413  333962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:28.443587  333962 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.493290  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:27.493307  333276 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:27.493359  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.524914  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.531668  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.532019  333276 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.532031  333276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:27.532222  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.567797  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
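
The `docker container inspect -f` template used above extracts which host port Docker mapped to the container's 22/tcp; sshutil then dials 127.0.0.1 on that port (33130 here) with the profile's id_rsa key. A sketch of the lookup (the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is bound to the minikube
// node container's SSH port, using the same template as the log.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("default-k8s-diff-port-538419")
	fmt.Println(port, err) // e.g. 33130, as in the sshutil lines above
}
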
	I1102 13:37:27.652323  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.668241  333276 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:27.674864  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:27.674945  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:27.680089  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.693623  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:27.693664  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:27.697013  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.711998  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:27.712105  333276 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:27.730732  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:27.730759  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:27.750616  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:27.750640  333276 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:27.770302  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:27.770348  333276 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:27.786951  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:27.786978  333276 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:27.803298  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:27.803327  333276 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:27.818949  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:27.818969  333276 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:27.832390  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:29.492024  333276 node_ready.go:49] node "default-k8s-diff-port-538419" is "Ready"
	I1102 13:37:29.492059  333276 node_ready.go:38] duration metric: took 1.82377358s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:29.492086  333276 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:29.492140  333276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:30.138979  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458843131s)
	I1102 13:37:30.139203  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.306780942s)
	I1102 13:37:30.139232  333276 api_server.go:72] duration metric: took 2.680469941s to wait for apiserver process to appear ...
	I1102 13:37:30.139245  333276 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:30.139262  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.139337  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.442032819s)
	I1102 13:37:30.140830  333276 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-538419 addons enable metrics-server
	
	I1102 13:37:30.144441  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.144472  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
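
A 500 from /healthz at this point is expected: the apiserver has just restarted and the two [-] poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) simply have not finished, so minikube keeps polling until the endpoint returns 200; by the 13:37:31 probe further down, the priority-class hook has already flipped to ok. A sketch of such a polling loop (TLS verification is skipped here purely for brevity; a real client would verify against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // retry while poststarthooks finish
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.85.2:8444/healthz", 2*time.Minute))
}
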
	I1102 13:37:30.146788  333276 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1102 13:37:28.444400  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:28.444417  333962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:28.444498  333962 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.444527  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:28.444586  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.444500  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.481261  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.483777  333962 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.483797  333962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:28.483850  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.485369  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.519190  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.625401  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:28.638037  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.653422  333962 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:28.653533  333962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:28.682341  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.694090  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:28.694153  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:28.716329  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:28.716362  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:28.737776  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:28.737802  333962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:28.755596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:28.755618  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:28.780596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:28.780618  333962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:28.797326  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:28.797355  333962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:28.814533  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:28.814561  333962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:28.832611  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:28.832643  333962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:28.856649  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:28.856713  333962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:28.874888  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:31.209184  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.571053535s)
	I1102 13:37:31.209241  333962 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.555675413s)
	I1102 13:37:31.209282  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.526844296s)
	I1102 13:37:31.209372  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.334451096s)
	I1102 13:37:31.209287  333962 api_server.go:72] duration metric: took 2.808316845s to wait for apiserver process to appear ...
	I1102 13:37:31.209432  333962 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:31.209539  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.211060  333962 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-066482 addons enable metrics-server
	
	I1102 13:37:31.216831  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.216854  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.222003  333962 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1102 13:37:28.750465  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:30.751057  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:31.223225  333962 addons.go:515] duration metric: took 2.823637855s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:31.709830  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.714383  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.714411  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:32.209645  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:32.214358  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 13:37:32.215702  333962 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:32.215723  333962 api_server.go:131] duration metric: took 1.006197716s to wait for apiserver health ...
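The healthz exchange above is a plain HTTPS poll: minikube hits the apiserver's /healthz roughly every 500ms (13:37:31.209, 31.709, 32.209), treats a 500 whose body lists [-] poststarthook failures as "not ready yet", and stops as soon as the endpoint answers 200 "ok". A minimal sketch of that loop, for illustration only (not minikube's actual code; the InsecureSkipVerify is an assumption standing in for the cluster-CA trust the real client uses):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200, printing the
// diagnostic body ("[+]ping ok ... healthz check failed") on each 500.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Assumption: skip TLS verification; minikube instead trusts the
		// CA it generated for the apiserver.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "/healthz returned 200: ok"
			}
			fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

The two [-] entries that hold the first probes at 500 (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are post-start hooks that complete within a second or two of apiserver startup, which is why the next polls succeed.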
	I1102 13:37:32.215740  333962 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:32.219326  333962 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:32.219361  333962 system_pods.go:61] "coredns-66bc5c9577-9knvp" [fc8ccf3a-6c3a-4df9-b174-358eea8022b8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219370  333962 system_pods.go:61] "etcd-newest-cni-066482" [b4f125a2-c9c3-4192-bf23-c4ad050bb815] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:32.219379  333962 system_pods.go:61] "kindnet-schdw" [74998f6e-2a7a-40d8-a5c2-a1142f69ee93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 13:37:32.219392  333962 system_pods.go:61] "kube-apiserver-newest-cni-066482" [e270489b-3057-480f-96dd-329cbcc6f0e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:32.219397  333962 system_pods.go:61] "kube-controller-manager-newest-cni-066482" [9b62b1ef-e72e-41f9-9e3d-c57bfaf0b578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:32.219403  333962 system_pods.go:61] "kube-proxy-fkp22" [85a24a6f-4f8c-4671-92f6-fbe43ab7bb10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 13:37:32.219408  333962 system_pods.go:61] "kube-scheduler-newest-cni-066482" [5f88460d-ea42-4891-a458-b86eb57b551e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:32.219417  333962 system_pods.go:61] "storage-provisioner" [3bbb95ec-ecf8-4335-b3df-82a08d03b66b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219424  333962 system_pods.go:74] duration metric: took 3.677705ms to wait for pod list to return data ...
	I1102 13:37:32.219434  333962 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:32.221997  333962 default_sa.go:45] found service account: "default"
	I1102 13:37:32.222015  333962 default_sa.go:55] duration metric: took 2.576388ms for default service account to be created ...
	I1102 13:37:32.222026  333962 kubeadm.go:587] duration metric: took 3.821064355s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:32.222059  333962 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:32.224451  333962 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:32.224479  333962 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:32.224495  333962 node_conditions.go:105] duration metric: took 2.431117ms to run NodePressure ...
	I1102 13:37:32.224508  333962 start.go:242] waiting for startup goroutines ...
	I1102 13:37:32.224519  333962 start.go:247] waiting for cluster config update ...
	I1102 13:37:32.224531  333962 start.go:256] writing updated cluster config ...
	I1102 13:37:32.224891  333962 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:32.277880  333962 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:32.280437  333962 out.go:179] * Done! kubectl is now configured to use "newest-cni-066482" cluster and "default" namespace by default
	W1102 13:37:29.133694  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:31.633878  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:32.248764  321355 pod_ready.go:94] pod "coredns-66bc5c9577-2dtpc" is "Ready"
	I1102 13:37:32.248791  321355 pod_ready.go:86] duration metric: took 36.005777547s for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.251505  321355 pod_ready.go:83] waiting for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.256003  321355 pod_ready.go:94] pod "etcd-no-preload-978795" is "Ready"
	I1102 13:37:32.256030  321355 pod_ready.go:86] duration metric: took 4.500033ms for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.258154  321355 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.262361  321355 pod_ready.go:94] pod "kube-apiserver-no-preload-978795" is "Ready"
	I1102 13:37:32.262386  321355 pod_ready.go:86] duration metric: took 4.208933ms for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.264670  321355 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.446929  321355 pod_ready.go:94] pod "kube-controller-manager-no-preload-978795" is "Ready"
	I1102 13:37:32.446958  321355 pod_ready.go:86] duration metric: took 182.263594ms for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.647228  321355 pod_ready.go:83] waiting for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.046223  321355 pod_ready.go:94] pod "kube-proxy-rmkmd" is "Ready"
	I1102 13:37:33.046245  321355 pod_ready.go:86] duration metric: took 398.98563ms for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.247357  321355 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646686  321355 pod_ready.go:94] pod "kube-scheduler-no-preload-978795" is "Ready"
	I1102 13:37:33.646712  321355 pod_ready.go:86] duration metric: took 399.328602ms for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646724  321355 pod_ready.go:40] duration metric: took 37.476249238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:33.693279  321355 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:33.695127  321355 out.go:179] * Done! kubectl is now configured to use "no-preload-978795" cluster and "default" namespace by default
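Each of the pod_ready waits threaded through this log does the same thing: for every label selector in the list ([k8s-app=kube-dns component=etcd ...]), it re-checks the matching kube-system pods until each reports the PodReady condition True or is gone. A rough client-go sketch of one such check, assuming a kubeconfig at the default path (illustrative, not minikube's exact implementation):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady mirrors the check behind `pod "..." is "Ready"`: the PodReady
// condition must be True.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One of the selectors from the log; the real wait loops over all six.
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			if !isReady(p) {
				fmt.Printf("pod %q is not \"Ready\"\n", p.Name)
				allReady = false
			}
		}
		if allReady {
			return
		}
		time.Sleep(2 * time.Second) // the log shows ~2s between re-checks
	}
}
```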
	I1102 13:37:30.148737  333276 addons.go:515] duration metric: took 2.689945409s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:30.639704  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.646596  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.646625  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.140024  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:31.144505  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1102 13:37:31.145652  333276 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:31.145677  333276 api_server.go:131] duration metric: took 1.006426268s to wait for apiserver health ...
	I1102 13:37:31.145686  333276 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:31.148654  333276 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:31.148693  333276 system_pods.go:61] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.148706  333276 system_pods.go:61] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.148715  333276 system_pods.go:61] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.148725  333276 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.148735  333276 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.148740  333276 system_pods.go:61] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.148749  333276 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.148752  333276 system_pods.go:61] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.148758  333276 system_pods.go:74] duration metric: took 3.0672ms to wait for pod list to return data ...
	I1102 13:37:31.148767  333276 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:31.151024  333276 default_sa.go:45] found service account: "default"
	I1102 13:37:31.151047  333276 default_sa.go:55] duration metric: took 2.27431ms for default service account to be created ...
	I1102 13:37:31.151056  333276 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:37:31.153886  333276 system_pods.go:86] 8 kube-system pods found
	I1102 13:37:31.153909  333276 system_pods.go:89] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.153917  333276 system_pods.go:89] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.153923  333276 system_pods.go:89] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.153933  333276 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.153941  333276 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.153948  333276 system_pods.go:89] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.153953  333276 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.153958  333276 system_pods.go:89] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.153965  333276 system_pods.go:126] duration metric: took 2.903516ms to wait for k8s-apps to be running ...
	I1102 13:37:31.153973  333276 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:37:31.154011  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:31.167191  333276 system_svc.go:56] duration metric: took 13.212049ms WaitForService to wait for kubelet
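The kubelet probe above needs no output parsing: `systemctl is-active --quiet` exits 0 only while the unit is active, so the SSH runner's exit status is the whole answer. An equivalent local sketch (the bare "kubelet" unit name is an assumption; the log invokes it through sudo over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 => active; anything else => inactive/failed/unknown.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
```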
	I1102 13:37:31.167214  333276 kubeadm.go:587] duration metric: took 3.70845301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:37:31.167229  333276 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:31.170065  333276 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:31.170091  333276 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:31.170118  333276 node_conditions.go:105] duration metric: took 2.883566ms to run NodePressure ...
	I1102 13:37:31.170133  333276 start.go:242] waiting for startup goroutines ...
	I1102 13:37:31.170146  333276 start.go:247] waiting for cluster config update ...
	I1102 13:37:31.170163  333276 start.go:256] writing updated cluster config ...
	I1102 13:37:31.170468  333276 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:31.174099  333276 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:31.178339  333276 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 13:37:33.184101  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:34.134125  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:36.633840  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:35.685411  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:38.184423  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:39.134511  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:41.633152  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:40.683713  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:43.183801  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:43.634797  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:46.133702  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:45.684695  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:48.183904  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:48.633463  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:49.633961  328990 pod_ready.go:94] pod "coredns-66bc5c9577-vpq66" is "Ready"
	I1102 13:37:49.633983  328990 pod_ready.go:86] duration metric: took 36.006114822s for pod "coredns-66bc5c9577-vpq66" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.636373  328990 pod_ready.go:83] waiting for pod "etcd-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.640305  328990 pod_ready.go:94] pod "etcd-embed-certs-748183" is "Ready"
	I1102 13:37:49.640326  328990 pod_ready.go:86] duration metric: took 3.933112ms for pod "etcd-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.642169  328990 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.645917  328990 pod_ready.go:94] pod "kube-apiserver-embed-certs-748183" is "Ready"
	I1102 13:37:49.645933  328990 pod_ready.go:86] duration metric: took 3.743148ms for pod "kube-apiserver-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.647713  328990 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.832391  328990 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748183" is "Ready"
	I1102 13:37:49.832415  328990 pod_ready.go:86] duration metric: took 184.682932ms for pod "kube-controller-manager-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.032477  328990 pod_ready.go:83] waiting for pod "kube-proxy-pg8nt" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.432219  328990 pod_ready.go:94] pod "kube-proxy-pg8nt" is "Ready"
	I1102 13:37:50.432252  328990 pod_ready.go:86] duration metric: took 399.749991ms for pod "kube-proxy-pg8nt" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.632021  328990 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:51.032263  328990 pod_ready.go:94] pod "kube-scheduler-embed-certs-748183" is "Ready"
	I1102 13:37:51.032285  328990 pod_ready.go:86] duration metric: took 400.23928ms for pod "kube-scheduler-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:51.032297  328990 pod_ready.go:40] duration metric: took 37.407986415s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:51.078471  328990 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:51.080252  328990 out.go:179] * Done! kubectl is now configured to use "embed-certs-748183" cluster and "default" namespace by default
	W1102 13:37:50.684482  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:52.684813  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:55.183972  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:57.684208  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:38:00.183283  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:38:02.184008  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 02 13:37:23 embed-certs-748183 crio[591]: time="2025-11-02T13:37:23.67936101Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 13:37:23 embed-certs-748183 crio[591]: time="2025-11-02T13:37:23.6828204Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 13:37:23 embed-certs-748183 crio[591]: time="2025-11-02T13:37:23.68284281Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.924838148Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4648dd56-fe23-4cd5-8603-aaed5ee411ee name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.929322765Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5bb1e3a2-2aa9-4f96-bcef-be824010bd65 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.935445015Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx/dashboard-metrics-scraper" id=00a5407d-652e-4074-b405-867c18c5e51d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.935681963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.947008076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.947590886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.995977446Z" level=info msg="Created container f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx/dashboard-metrics-scraper" id=00a5407d-652e-4074-b405-867c18c5e51d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.996702371Z" level=info msg="Starting container: f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71" id=b383bc96-a6ca-4a11-9416-dd30864d1410 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:36 embed-certs-748183 crio[591]: time="2025-11-02T13:37:36.999049212Z" level=info msg="Started container" PID=1771 containerID=f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx/dashboard-metrics-scraper id=b383bc96-a6ca-4a11-9416-dd30864d1410 name=/runtime.v1.RuntimeService/StartContainer sandboxID=473df30c6502e80b8647383ea9b909db07a669059203c34b3cd74af5f9fb65fb
	Nov 02 13:37:38 embed-certs-748183 crio[591]: time="2025-11-02T13:37:38.029620485Z" level=info msg="Removing container: c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623" id=7ff4b281-32b7-43ae-a50c-6d8009560e9f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:38 embed-certs-748183 crio[591]: time="2025-11-02T13:37:38.041699557Z" level=info msg="Removed container c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx/dashboard-metrics-scraper" id=7ff4b281-32b7-43ae-a50c-6d8009560e9f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.047272757Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e2046be2-b0c3-494b-90c8-583406e135f9 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.048224203Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a98d6694-eb56-41b7-8793-d967514083fa name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.049336053Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=22d754af-87b8-4df2-ad38-b9d34fd71d2e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.049468452Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.053835385Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.054010783Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/64135ffa976b4acda2936164de473d3e1816f7b236152f11748b19c1ae9da0e7/merged/etc/passwd: no such file or directory"
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.054038307Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/64135ffa976b4acda2936164de473d3e1816f7b236152f11748b19c1ae9da0e7/merged/etc/group: no such file or directory"
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.054269804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.080824687Z" level=info msg="Created container 99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b: kube-system/storage-provisioner/storage-provisioner" id=22d754af-87b8-4df2-ad38-b9d34fd71d2e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.081429757Z" level=info msg="Starting container: 99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b" id=64c1ec71-00b0-4df7-bd5a-2f1f8611805d name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:37:44 embed-certs-748183 crio[591]: time="2025-11-02T13:37:44.083115071Z" level=info msg="Started container" PID=1789 containerID=99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b description=kube-system/storage-provisioner/storage-provisioner id=64c1ec71-00b0-4df7-bd5a-2f1f8611805d name=/runtime.v1.RuntimeService/StartContainer sandboxID=190fdd55a8ebba440ab40f8474250de33a900831867c4033e44fed5135587019
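The CRI-O excerpt above is the runtime half of a standard CRI conversation driven by the kubelet: ImageStatus to confirm the image is present, then CreateContainer and StartContainer against an existing pod sandbox. A hedged sketch of the first of those calls over CRI-O's default socket (socket path and image name lifted from this environment; illustrative, not kubelet code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	// Mirrors the "Checking image status" lines logged by crio above.
	resp, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/storage-provisioner:v5"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("image present:", resp.Image != nil)
}
```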
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	99a40572d11f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   190fdd55a8ebb       storage-provisioner                          kube-system
	f64639b390fd2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago      Exited              dashboard-metrics-scraper   2                   473df30c6502e       dashboard-metrics-scraper-6ffb444bf9-p8zfx   kubernetes-dashboard
	1e7e496e5f29b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   1615cdf662002       kubernetes-dashboard-855c9754f9-t4hjh        kubernetes-dashboard
	8b58b034d001e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   de4960307c563       coredns-66bc5c9577-vpq66                     kube-system
	9f476797359bc       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   dc17844db4ac6       busybox                                      default
	d9bd80a8cd406       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   e49a45b27e948       kube-proxy-pg8nt                             kube-system
	e8c35dcf7d68a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   ca7797c1846b3       kindnet-9zwww                                kube-system
	08ed3a888e107       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   190fdd55a8ebb       storage-provisioner                          kube-system
	92c81ac32663f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   d98bc50d3d88f       kube-apiserver-embed-certs-748183            kube-system
	915e447acc04f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   d2abdcb5e9d65       kube-scheduler-embed-certs-748183            kube-system
	7ce1beed8bfec       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   2f8d17544e010       etcd-embed-certs-748183                      kube-system
	4f580374d7075       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   fe927e35d3e0f       kube-controller-manager-embed-certs-748183   kube-system
	
	
	==> coredns [8b58b034d001ed44effae858626302ae16cc57c4e26297e53e6cb6b96e66cf48] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47051 - 11717 "HINFO IN 942979073962769649.6887173836007951737. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.016157486s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-748183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-748183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=embed-certs-748183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_36_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:36:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-748183
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:38:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:37:43 +0000   Sun, 02 Nov 2025 13:36:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:37:43 +0000   Sun, 02 Nov 2025 13:36:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:37:43 +0000   Sun, 02 Nov 2025 13:36:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:37:43 +0000   Sun, 02 Nov 2025 13:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-748183
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                b44fc6a8-f48d-4728-a7f6-4178f12db103
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-vpq66                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-748183                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-9zwww                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-748183             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-748183    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-pg8nt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-748183             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-p8zfx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-t4hjh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
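As a cross-check, the Allocated totals are just the column sums of the pod table above: CPU requests 100m + 100m + 100m + 250m + 200m + 100m = 850m; memory requests 70Mi + 100Mi + 50Mi = 220Mi; and the 220Mi memory limit is coredns's 170Mi plus kindnet's 50Mi.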
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node embed-certs-748183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node embed-certs-748183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node embed-certs-748183 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-748183 event: Registered Node embed-certs-748183 in Controller
	  Normal  NodeReady                97s                kubelet          Node embed-certs-748183 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 59s)  kubelet          Node embed-certs-748183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 59s)  kubelet          Node embed-certs-748183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 59s)  kubelet          Node embed-certs-748183 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node embed-certs-748183 event: Registered Node embed-certs-748183 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [7ce1beed8bfeca2e3dbe79de858297d5596eb32ea1a78ba33516e86fff957e00] <==
	{"level":"warn","ts":"2025-11-02T13:37:11.373578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.379660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.388211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.394520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.401334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.407617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.413643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.420160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.428198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.434798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.445702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.451850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.459824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.467229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.477963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.484145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.490901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.497064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.503824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.510206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.517113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.536312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.542550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.548910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:11.602617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44382","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:38:08 up  1:20,  0 user,  load average: 2.63, 3.72, 2.63
	Linux embed-certs-748183 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e8c35dcf7d68a661284766a04b37fc308886ce23eb03d3449f58204b53949056] <==
	I1102 13:37:13.556755       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:37:13.556994       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1102 13:37:13.557148       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:37:13.557163       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:37:13.557187       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:37:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:37:13.662743       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:37:13.662786       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:37:13.662808       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:37:13.663156       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:37:13.965036       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:37:13.965061       1 metrics.go:72] Registering metrics
	I1102 13:37:13.965127       1 controller.go:711] "Syncing nftables rules"
	I1102 13:37:23.663295       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:37:23.663342       1 main.go:301] handling current node
	I1102 13:37:33.667061       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:37:33.667092       1 main.go:301] handling current node
	I1102 13:37:43.663386       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:37:43.663417       1 main.go:301] handling current node
	I1102 13:37:53.666419       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:37:53.666450       1 main.go:301] handling current node
	I1102 13:38:03.668137       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1102 13:38:03.668172       1 main.go:301] handling current node
	
	
	==> kube-apiserver [92c81ac32663feb2e55e81de4aea9ec83b4adedd0494edb88c83e13189d4ab75] <==
	I1102 13:37:12.091040       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 13:37:12.091051       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 13:37:12.091097       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 13:37:12.091112       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 13:37:12.091734       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 13:37:12.091076       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 13:37:12.092209       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:37:12.096959       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1102 13:37:12.098430       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 13:37:12.107667       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1102 13:37:12.116035       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1102 13:37:12.116064       1 policy_source.go:240] refreshing policies
	I1102 13:37:12.116192       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:37:12.365658       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 13:37:12.394864       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:37:12.412168       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:37:12.418846       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:37:12.425086       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:37:12.455805       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.36.46"}
	I1102 13:37:12.475649       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.201.208"}
	I1102 13:37:12.992830       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:37:14.971855       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 13:37:14.971900       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 13:37:15.371875       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:37:15.571558       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4f580374d707565df73a17f079d127e0b80c61ce6670bb6a10a142440e8d5a5a] <==
	I1102 13:37:14.945702       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 13:37:14.947863       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 13:37:14.950155       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 13:37:14.951316       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:37:14.967711       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 13:37:14.967736       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 13:37:14.967793       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 13:37:14.967849       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 13:37:14.967851       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 13:37:14.967853       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 13:37:14.968180       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 13:37:14.967852       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 13:37:14.969520       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 13:37:14.969648       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 13:37:14.973858       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:37:14.973872       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:37:14.973877       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:37:14.976027       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:37:14.978461       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 13:37:14.980609       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 13:37:14.983136       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 13:37:14.986129       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 13:37:14.987625       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 13:37:14.990010       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 13:37:14.995692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d9bd80a8cd406f1eb033d3ba6453e88c337437c7215205f73e35e0729b0a960e] <==
	I1102 13:37:13.323661       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:37:13.385282       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:37:13.485504       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:37:13.485545       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1102 13:37:13.485704       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:37:13.503650       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:37:13.503696       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:37:13.508947       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:37:13.509416       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:37:13.509436       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:13.510867       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:37:13.510893       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:37:13.510916       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:37:13.510922       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:37:13.510926       1 config.go:200] "Starting service config controller"
	I1102 13:37:13.510945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:37:13.510934       1 config.go:309] "Starting node config controller"
	I1102 13:37:13.510970       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:37:13.510977       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:37:13.611858       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:37:13.611900       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:37:13.611914       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [915e447acc04f2663378328784e388e7b53096e05c75aacb4faa06eac072d743] <==
	I1102 13:37:11.237163       1 serving.go:386] Generated self-signed cert in-memory
	I1102 13:37:12.055080       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:37:12.055122       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:12.062741       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1102 13:37:12.062785       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1102 13:37:12.062809       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:12.062831       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:37:12.062834       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:12.062840       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:37:12.063368       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:37:12.063483       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 13:37:12.163543       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:37:12.163559       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1102 13:37:12.163554       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:37:15 embed-certs-748183 kubelet[746]: I1102 13:37:15.541914     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78st7\" (UniqueName: \"kubernetes.io/projected/5163c067-aafb-41eb-bfce-05f4754d5cbc-kube-api-access-78st7\") pod \"kubernetes-dashboard-855c9754f9-t4hjh\" (UID: \"5163c067-aafb-41eb-bfce-05f4754d5cbc\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-t4hjh"
	Nov 02 13:37:15 embed-certs-748183 kubelet[746]: I1102 13:37:15.541959     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-p8zfx\" (UID: \"8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx"
	Nov 02 13:37:17 embed-certs-748183 kubelet[746]: I1102 13:37:17.959759     746 scope.go:117] "RemoveContainer" containerID="e7358d22aa329f0e59f233488a2023262be71f5a37cf71c64dfe24eaf731c9c6"
	Nov 02 13:37:18 embed-certs-748183 kubelet[746]: I1102 13:37:18.965107     746 scope.go:117] "RemoveContainer" containerID="e7358d22aa329f0e59f233488a2023262be71f5a37cf71c64dfe24eaf731c9c6"
	Nov 02 13:37:18 embed-certs-748183 kubelet[746]: I1102 13:37:18.965242     746 scope.go:117] "RemoveContainer" containerID="c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623"
	Nov 02 13:37:18 embed-certs-748183 kubelet[746]: E1102 13:37:18.965446     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:37:19 embed-certs-748183 kubelet[746]: I1102 13:37:19.168761     746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 02 13:37:19 embed-certs-748183 kubelet[746]: I1102 13:37:19.968937     746 scope.go:117] "RemoveContainer" containerID="c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623"
	Nov 02 13:37:19 embed-certs-748183 kubelet[746]: E1102 13:37:19.969131     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:37:21 embed-certs-748183 kubelet[746]: I1102 13:37:21.985632     746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-t4hjh" podStartSLOduration=1.598274577 podStartE2EDuration="6.985611896s" podCreationTimestamp="2025-11-02 13:37:15 +0000 UTC" firstStartedPulling="2025-11-02 13:37:15.786075234 +0000 UTC m=+5.952860109" lastFinishedPulling="2025-11-02 13:37:21.173412552 +0000 UTC m=+11.340197428" observedRunningTime="2025-11-02 13:37:21.985491841 +0000 UTC m=+12.152276715" watchObservedRunningTime="2025-11-02 13:37:21.985611896 +0000 UTC m=+12.152396757"
	Nov 02 13:37:24 embed-certs-748183 kubelet[746]: I1102 13:37:24.964699     746 scope.go:117] "RemoveContainer" containerID="c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623"
	Nov 02 13:37:24 embed-certs-748183 kubelet[746]: E1102 13:37:24.964879     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:37:36 embed-certs-748183 kubelet[746]: I1102 13:37:36.923251     746 scope.go:117] "RemoveContainer" containerID="c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623"
	Nov 02 13:37:38 embed-certs-748183 kubelet[746]: I1102 13:37:38.027476     746 scope.go:117] "RemoveContainer" containerID="c5c9312876a03f51e38bbf810867683cc790a29da89e33abd8e84f66f3e83623"
	Nov 02 13:37:38 embed-certs-748183 kubelet[746]: I1102 13:37:38.027728     746 scope.go:117] "RemoveContainer" containerID="f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71"
	Nov 02 13:37:38 embed-certs-748183 kubelet[746]: E1102 13:37:38.027944     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:37:44 embed-certs-748183 kubelet[746]: I1102 13:37:44.046890     746 scope.go:117] "RemoveContainer" containerID="08ed3a888e10792c720b91d4af71d51d5756b14ec6e8b23bd5574eacf0dd9cfe"
	Nov 02 13:37:44 embed-certs-748183 kubelet[746]: I1102 13:37:44.965670     746 scope.go:117] "RemoveContainer" containerID="f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71"
	Nov 02 13:37:44 embed-certs-748183 kubelet[746]: E1102 13:37:44.965849     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:37:56 embed-certs-748183 kubelet[746]: I1102 13:37:56.923476     746 scope.go:117] "RemoveContainer" containerID="f64639b390fd2c2d585ad67a3560e64fc61d1eef15f200e921da796aa9401f71"
	Nov 02 13:37:56 embed-certs-748183 kubelet[746]: E1102 13:37:56.924002     746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8zfx_kubernetes-dashboard(8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8zfx" podUID="8fc4a3a2-9ab6-4027-800f-e0cf1a9a9c5f"
	Nov 02 13:38:03 embed-certs-748183 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:38:03 embed-certs-748183 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:38:03 embed-certs-748183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 02 13:38:03 embed-certs-748183 systemd[1]: kubelet.service: Consumed 1.691s CPU time.
	
	
	==> kubernetes-dashboard [1e7e496e5f29b31984c3f1c59eaba41bdd280208ffae335779ddf825b58a686e] <==
	2025/11/02 13:37:21 Starting overwatch
	2025/11/02 13:37:21 Using namespace: kubernetes-dashboard
	2025/11/02 13:37:21 Using in-cluster config to connect to apiserver
	2025/11/02 13:37:21 Using secret token for csrf signing
	2025/11/02 13:37:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 13:37:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 13:37:21 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 13:37:21 Generating JWE encryption key
	2025/11/02 13:37:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 13:37:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 13:37:21 Initializing JWE encryption key from synchronized object
	2025/11/02 13:37:21 Creating in-cluster Sidecar client
	2025/11/02 13:37:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 13:37:21 Serving insecurely on HTTP port: 9090
	2025/11/02 13:37:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [08ed3a888e10792c720b91d4af71d51d5756b14ec6e8b23bd5574eacf0dd9cfe] <==
	I1102 13:37:13.287494       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 13:37:43.289667       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [99a40572d11f3d6b6b87f71d288d2c4941a7be022c2cb33c0e2b50e99e81368b] <==
	I1102 13:37:44.094850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:37:44.101906       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:37:44.101953       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 13:37:44.104181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:47.559585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:51.819535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:55.418296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:37:58.472348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:01.494230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:01.498561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:38:01.498723       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:38:01.498849       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-748183_8420181f-fc0b-4799-a9f2-de18cbc5f876!
	I1102 13:38:01.498854       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f5836755-5bb9-4f0c-9c57-d7cfd1b93802", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-748183_8420181f-fc0b-4799-a9f2-de18cbc5f876 became leader
	W1102 13:38:01.500723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:01.504324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:38:01.599120       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-748183_8420181f-fc0b-4799-a9f2-de18cbc5f876!
	W1102 13:38:03.507289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:03.510932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:05.514083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:05.518508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:07.521186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:07.525268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
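Note: in the storage-provisioner log above, leader election goes through a v1 Endpoints lock ("attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath"), which is why every poll draws the "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warning. For comparison only, the following is a minimal client-go sketch, not minikube's actual code, of the same election expressed with a coordination.k8s.io/v1 Lease lock, which never touches the deprecated Endpoints API; the lease name and namespace are taken from the log, everything else is illustrative:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		// In-cluster config, as the provisioner itself runs in a pod.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease-based lock in coordination.k8s.io/v1. Unlike the v1 Endpoints
		// lock seen in the log above, acquiring and renewing it produces no
		// Endpoints reads or writes, so no deprecation warnings.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // lease name from the log
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease; shutting down")
				},
			},
		})
	}

With a Lease lock, leader identity and renew timestamps live in a dedicated Lease object rather than in annotations on a shared Endpoints object.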
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-748183 -n embed-certs-748183
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-748183 -n embed-certs-748183: exit status 2 (324.295819ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-748183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.16s)
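Note: this Pause failure has the same shape as the default-k8s-diff-port one in the next section, where the stderr is shown in full: minikube pause asks the node for its running CRI containers with "sudo runc list -f json", the call fails with "open /run/runc: no such file or directory", and after a few retries minikube gives up with GUEST_PAUSE and exit status 80. A minimal Go sketch of that retry-then-fail loop follows; runSSH is a hypothetical stand-in for minikube's ssh_runner, and the real retry.go jitters the backoff interval rather than doubling it:

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	// listRunningContainers mirrors the pattern in the stderr of the next
	// section: run "sudo runc list -f json" on the node, retry with growing
	// backoff, and surface the last error once retries are exhausted.
	func listRunningContainers(runSSH func(cmd string) ([]byte, error)) ([]byte, error) {
		backoff := 150 * time.Millisecond
		var lastErr error
		for attempt := 0; attempt < 4; attempt++ {
			out, err := runSSH("sudo runc list -f json")
			if err == nil {
				return out, nil
			}
			lastErr = err
			log.Printf("will retry after %v: list running: runc: %v", backoff, err)
			time.Sleep(backoff)
			backoff *= 2 // assumed doubling; minikube's retry.go randomizes the step
		}
		return nil, fmt.Errorf("list running: runc: %w", lastErr)
	}

	func main() {
		// Toy invocation: always fail, as on a node where /run/runc is missing.
		_, err := listRunningContainers(func(string) ([]byte, error) {
			return nil, fmt.Errorf("open /run/runc: no such file or directory")
		})
		log.Println(err)
	}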

x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-538419 --alsologtostderr -v=1
E1102 13:38:20.032596   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/calico-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:38:20.039004   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/calico-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:38:20.051121   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/calico-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:38:20.072909   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/calico-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:38:20.114306   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/calico-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:38:20.195731   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/calico-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:38:20.357449   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/calico-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:38:20.679179   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/calico-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:38:21.321399   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/calico-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-538419 --alsologtostderr -v=1: exit status 80 (2.167914647s)

-- stdout --
	* Pausing node default-k8s-diff-port-538419 ... 
	
	

-- /stdout --
** stderr ** 
	I1102 13:38:19.853180  344658 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:38:19.853472  344658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:38:19.853484  344658 out.go:374] Setting ErrFile to fd 2...
	I1102 13:38:19.853490  344658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:38:19.853768  344658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:38:19.854038  344658 out.go:368] Setting JSON to false
	I1102 13:38:19.854079  344658 mustload.go:66] Loading cluster: default-k8s-diff-port-538419
	I1102 13:38:19.854458  344658 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:38:19.854880  344658 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:38:19.872368  344658 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:38:19.872703  344658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:38:19.927754  344658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-02 13:38:19.916385016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:38:19.928448  344658 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-538419 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 13:38:19.930216  344658 out.go:179] * Pausing node default-k8s-diff-port-538419 ... 
	I1102 13:38:19.931528  344658 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:38:19.931821  344658 ssh_runner.go:195] Run: systemctl --version
	I1102 13:38:19.931865  344658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:38:19.950024  344658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:38:20.048817  344658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:38:20.061617  344658 pause.go:52] kubelet running: true
	I1102 13:38:20.061675  344658 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:38:20.214602  344658 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:38:20.214702  344658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:38:20.279838  344658 cri.go:89] found id: "b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0"
	I1102 13:38:20.279865  344658 cri.go:89] found id: "c9b5ad92438bb88eb2038be88d7936f90369f0d2d1fbc95af1cb6ec286ad7cee"
	I1102 13:38:20.279870  344658 cri.go:89] found id: "a1deaef6b0856c7956d3d4a765a97d00c3bbca5496687d19141dbb5eebfcbe1e"
	I1102 13:38:20.279875  344658 cri.go:89] found id: "5893cf1512ee0f6c8e74166fa347d602d16b90bbd7c1a8790852d522434c5fb6"
	I1102 13:38:20.279878  344658 cri.go:89] found id: "9fe26d5a73cb2e5383872650fb2ecf2e6884d1ef50222efe25cfb4164f2b146f"
	I1102 13:38:20.279881  344658 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:38:20.279883  344658 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:38:20.279887  344658 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:38:20.279891  344658 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:38:20.279899  344658 cri.go:89] found id: "2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2"
	I1102 13:38:20.279904  344658 cri.go:89] found id: "3b4d565f2df6b7af050261a5726ef42418b7b75d9b27549b6ac006690f117bb7"
	I1102 13:38:20.279908  344658 cri.go:89] found id: ""
	I1102 13:38:20.279953  344658 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:38:20.291078  344658 retry.go:31] will retry after 172.018948ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:38:20Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:38:20.463623  344658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:38:20.494415  344658 pause.go:52] kubelet running: false
	I1102 13:38:20.494510  344658 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:38:20.631812  344658 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:38:20.631898  344658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:38:20.698227  344658 cri.go:89] found id: "b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0"
	I1102 13:38:20.698248  344658 cri.go:89] found id: "c9b5ad92438bb88eb2038be88d7936f90369f0d2d1fbc95af1cb6ec286ad7cee"
	I1102 13:38:20.698252  344658 cri.go:89] found id: "a1deaef6b0856c7956d3d4a765a97d00c3bbca5496687d19141dbb5eebfcbe1e"
	I1102 13:38:20.698255  344658 cri.go:89] found id: "5893cf1512ee0f6c8e74166fa347d602d16b90bbd7c1a8790852d522434c5fb6"
	I1102 13:38:20.698258  344658 cri.go:89] found id: "9fe26d5a73cb2e5383872650fb2ecf2e6884d1ef50222efe25cfb4164f2b146f"
	I1102 13:38:20.698261  344658 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:38:20.698264  344658 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:38:20.698266  344658 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:38:20.698268  344658 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:38:20.698273  344658 cri.go:89] found id: "2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2"
	I1102 13:38:20.698276  344658 cri.go:89] found id: "3b4d565f2df6b7af050261a5726ef42418b7b75d9b27549b6ac006690f117bb7"
	I1102 13:38:20.698278  344658 cri.go:89] found id: ""
	I1102 13:38:20.698314  344658 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:38:20.709907  344658 retry.go:31] will retry after 217.702568ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:38:20Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:38:20.928408  344658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:38:20.941133  344658 pause.go:52] kubelet running: false
	I1102 13:38:20.941189  344658 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:38:21.077809  344658 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:38:21.077880  344658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:38:21.142626  344658 cri.go:89] found id: "b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0"
	I1102 13:38:21.142649  344658 cri.go:89] found id: "c9b5ad92438bb88eb2038be88d7936f90369f0d2d1fbc95af1cb6ec286ad7cee"
	I1102 13:38:21.142653  344658 cri.go:89] found id: "a1deaef6b0856c7956d3d4a765a97d00c3bbca5496687d19141dbb5eebfcbe1e"
	I1102 13:38:21.142656  344658 cri.go:89] found id: "5893cf1512ee0f6c8e74166fa347d602d16b90bbd7c1a8790852d522434c5fb6"
	I1102 13:38:21.142659  344658 cri.go:89] found id: "9fe26d5a73cb2e5383872650fb2ecf2e6884d1ef50222efe25cfb4164f2b146f"
	I1102 13:38:21.142663  344658 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:38:21.142665  344658 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:38:21.142668  344658 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:38:21.142671  344658 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:38:21.142682  344658 cri.go:89] found id: "2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2"
	I1102 13:38:21.142685  344658 cri.go:89] found id: "3b4d565f2df6b7af050261a5726ef42418b7b75d9b27549b6ac006690f117bb7"
	I1102 13:38:21.142688  344658 cri.go:89] found id: ""
	I1102 13:38:21.142726  344658 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:38:21.154899  344658 retry.go:31] will retry after 569.847493ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:38:21Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:38:21.725774  344658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:38:21.738289  344658 pause.go:52] kubelet running: false
	I1102 13:38:21.738343  344658 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 13:38:21.873751  344658 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 13:38:21.873815  344658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 13:38:21.940457  344658 cri.go:89] found id: "b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0"
	I1102 13:38:21.940486  344658 cri.go:89] found id: "c9b5ad92438bb88eb2038be88d7936f90369f0d2d1fbc95af1cb6ec286ad7cee"
	I1102 13:38:21.940490  344658 cri.go:89] found id: "a1deaef6b0856c7956d3d4a765a97d00c3bbca5496687d19141dbb5eebfcbe1e"
	I1102 13:38:21.940494  344658 cri.go:89] found id: "5893cf1512ee0f6c8e74166fa347d602d16b90bbd7c1a8790852d522434c5fb6"
	I1102 13:38:21.940496  344658 cri.go:89] found id: "9fe26d5a73cb2e5383872650fb2ecf2e6884d1ef50222efe25cfb4164f2b146f"
	I1102 13:38:21.940499  344658 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:38:21.940502  344658 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:38:21.940504  344658 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:38:21.940506  344658 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:38:21.940524  344658 cri.go:89] found id: "2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2"
	I1102 13:38:21.940529  344658 cri.go:89] found id: "3b4d565f2df6b7af050261a5726ef42418b7b75d9b27549b6ac006690f117bb7"
	I1102 13:38:21.940532  344658 cri.go:89] found id: ""
	I1102 13:38:21.940596  344658 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:38:21.953997  344658 out.go:203] 
	W1102 13:38:21.955302  344658 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:38:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:38:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:38:21.955328  344658 out.go:285] * 
	* 
	W1102 13:38:21.959575  344658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:38:21.960963  344658 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-538419 --alsologtostderr -v=1 failed: exit status 80
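Note: the stderr above pins the failure down. The kubelet is stopped cleanly and the CRI container IDs are enumerated, but every "sudo runc list -f json" probe fails with "open /run/runc: no such file or directory", so the pause path never obtains a runnable container list and exits with GUEST_PAUSE. If reproducing by hand, the same probe can presumably be run directly against this profile with: minikube ssh -p default-k8s-diff-port-538419 -- sudo runc list -f json (profile name taken from this run; the error suggests the runc state root /run/runc simply does not exist on this CRI-O node image).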
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-538419
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-538419:

-- stdout --
	[
	    {
	        "Id": "922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2",
	        "Created": "2025-11-02T13:36:10.354191788Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333633,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:37:19.830426571Z",
	            "FinishedAt": "2025-11-02T13:37:18.25192093Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/hosts",
	        "LogPath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2-json.log",
	        "Name": "/default-k8s-diff-port-538419",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-538419:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-538419",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2",
	                "LowerDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-538419",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-538419/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-538419",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-538419",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-538419",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "46ae7c6859991c3a2bfea89e94d77e2a96bb8ed98c4ee7b5a9438d25bbb5dbdf",
	            "SandboxKey": "/var/run/docker/netns/46ae7c685999",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-538419": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:00:eb:8e:27:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8a5177e2530dcf8dba1a46a1c8708fe51c8cc64912038433c6196e6d34da5a5b",
	                    "EndpointID": "04cb658b4ab413819cdc3d19af55f110303d8a62bc38f10d57392a6edcd91621",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-538419",
	                        "922c5d262078"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
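For reference, the port mappings shown in the inspect output above (container ports 22, 2376, 5000, 8444 and 32443 published on host ports 33130-33134) can be queried individually instead of dumping the whole document; a minimal sketch, assuming the container still exists:

    # published host port for the 8444/tcp apiserver port
    docker port default-k8s-diff-port-538419 8444/tcp
    # Go-template equivalent, the same form minikube itself uses later in this log
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-538419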
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419: exit status 2 (314.479852ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
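The probe can be re-run by hand to watch the exit code directly; minikube status reports state both in its output and via a non-zero exit code when a component is not in the expected state, which is why the harness notes "(may be ok)" here. A sketch, not part of the harness:

    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-538419
    echo "exit code: $?"   # 0 = fully running; non-zero flags a stopped or degraded component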
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-538419 logs -n 25
E1102 13:38:22.602740   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/calico-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-538419 logs -n 25: (1.060143102s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538419 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p embed-certs-748183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ stop    │ -p newest-cni-066482 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:38 UTC │
	│ addons  │ enable dashboard -p newest-cni-066482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ newest-cni-066482 image list --format=json                                                                                                                                                                                                    │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p newest-cni-066482 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ no-preload-978795 image list --format=json                                                                                                                                                                                                    │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p no-preload-978795 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ delete  │ -p no-preload-978795                                                                                                                                                                                                                          │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p no-preload-978795                                                                                                                                                                                                                          │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ embed-certs-748183 image list --format=json                                                                                                                                                                                                   │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │ 02 Nov 25 13:38 UTC │
	│ pause   │ -p embed-certs-748183 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │                     │
	│ delete  │ -p embed-certs-748183                                                                                                                                                                                                                         │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │ 02 Nov 25 13:38 UTC │
	│ delete  │ -p embed-certs-748183                                                                                                                                                                                                                         │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │ 02 Nov 25 13:38 UTC │
	│ image   │ default-k8s-diff-port-538419 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │ 02 Nov 25 13:38 UTC │
	│ pause   │ -p default-k8s-diff-port-538419 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:37:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:37:20.524373  333962 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:37:20.524647  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524664  333962 out.go:374] Setting ErrFile to fd 2...
	I1102 13:37:20.524670  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524846  333962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:37:20.525403  333962 out.go:368] Setting JSON to false
	I1102 13:37:20.526966  333962 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4793,"bootTime":1762085848,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:37:20.527085  333962 start.go:143] virtualization: kvm guest
	I1102 13:37:20.531180  333962 out.go:179] * [newest-cni-066482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:37:20.533535  333962 notify.go:221] Checking for updates...
	I1102 13:37:20.533705  333962 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:37:20.535165  333962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:37:20.536733  333962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:20.538369  333962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:37:20.539773  333962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:37:20.541014  333962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:37:20.543949  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:20.544901  333962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:37:20.580929  333962 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:37:20.581269  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.677940  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.664880977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.678092  333962 docker.go:319] overlay module found
	I1102 13:37:20.686090  333962 out.go:179] * Using the docker driver based on existing profile
	I1102 13:37:20.689767  333962 start.go:309] selected driver: docker
	I1102 13:37:20.689788  333962 start.go:930] validating driver "docker" against &{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.689907  333962 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:37:20.690830  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.765132  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.75342287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.765679  333962 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:20.765731  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:20.765799  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:20.765881  333962 start.go:353] cluster config:
	{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.825212  333962 out.go:179] * Starting "newest-cni-066482" primary control-plane node in "newest-cni-066482" cluster
	I1102 13:37:20.829240  333962 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:37:20.869092  333962 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:37:20.895924  333962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:37:20.895925  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:20.896230  333962 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:37:20.896249  333962 cache.go:59] Caching tarball of preloaded images
	I1102 13:37:20.896370  333962 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:37:20.896389  333962 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:37:20.896531  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:20.923310  333962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:37:20.923336  333962 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:37:20.923354  333962 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:37:20.923397  333962 start.go:360] acquireMachinesLock for newest-cni-066482: {Name:mk25ceca9700045fc79c727ac5793f50b1f35f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:37:20.923467  333962 start.go:364] duration metric: took 45.165µs to acquireMachinesLock for "newest-cni-066482"
	I1102 13:37:20.923495  333962 start.go:96] Skipping create...Using existing machine configuration
	I1102 13:37:20.923507  333962 fix.go:54] fixHost starting: 
	I1102 13:37:20.923821  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:20.947956  333962 fix.go:112] recreateIfNeeded on newest-cni-066482: state=Stopped err=<nil>
	W1102 13:37:20.947991  333962 fix.go:138] unexpected machine state, will restart: <nil>
	W1102 13:37:17.749910  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:19.754111  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:18.133437  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:20.135974  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:22.633523  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
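The pod_ready warnings above come from polling the pods' Ready condition; the same condition can be inspected directly with kubectl (a sketch, assuming KUBECONFIG points at the affected profile):

	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl -n kube-system get pod coredns-66bc5c9577-vpq66 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'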
	I1102 13:37:19.800458  333276 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-538419" ...
	I1102 13:37:19.800582  333276 cli_runner.go:164] Run: docker start default-k8s-diff-port-538419
	I1102 13:37:20.258040  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:20.285518  333276 kic.go:430] container "default-k8s-diff-port-538419" state is running.
	I1102 13:37:20.285975  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:20.314790  333276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:37:20.315668  333276 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:20.316243  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:20.344162  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:20.344635  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:20.344656  333276 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:20.345938  333276 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42554->127.0.0.1:33130: read: connection reset by peer
	I1102 13:37:23.485888  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.485911  333276 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-538419"
	I1102 13:37:23.485968  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.504539  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.504787  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.504808  333276 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-538419 && echo "default-k8s-diff-port-538419" | sudo tee /etc/hostname
	I1102 13:37:23.654299  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.654392  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.673075  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.673329  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.673355  333276 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-538419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-538419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-538419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:23.814290  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:23.814321  333276 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:23.814341  333276 ubuntu.go:190] setting up certificates
	I1102 13:37:23.814351  333276 provision.go:84] configureAuth start
	I1102 13:37:23.814396  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:23.831955  333276 provision.go:143] copyHostCerts
	I1102 13:37:23.832026  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:23.832046  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:23.832132  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:23.832261  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:23.832273  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:23.832318  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:23.832420  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:23.832433  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:23.832471  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:23.832546  333276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-538419 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-538419 localhost minikube]
	I1102 13:37:24.219472  333276 provision.go:177] copyRemoteCerts
	I1102 13:37:24.219536  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.219587  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.237848  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.340891  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 13:37:24.358910  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:24.376167  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:24.393830  333276 provision.go:87] duration metric: took 579.46643ms to configureAuth
	I1102 13:37:24.393865  333276 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:24.394064  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:24.394157  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.412877  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.413122  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:24.413143  333276 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:20.978818  333962 out.go:252] * Restarting existing docker container for "newest-cni-066482" ...
	I1102 13:37:20.978914  333962 cli_runner.go:164] Run: docker start newest-cni-066482
	I1102 13:37:21.270167  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:21.288682  333962 kic.go:430] container "newest-cni-066482" state is running.
	I1102 13:37:21.289009  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:21.309331  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:21.309611  333962 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:21.309709  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:21.330053  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:21.330413  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:21.330432  333962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:21.331174  333962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55362->127.0.0.1:33135: read: connection reset by peer
	I1102 13:37:24.473386  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.473415  333962 ubuntu.go:182] provisioning hostname "newest-cni-066482"
	I1102 13:37:24.473479  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.491931  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.492137  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.492150  333962 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-066482 && echo "newest-cni-066482" | sudo tee /etc/hostname
	I1102 13:37:24.643677  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.643803  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.663238  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.663468  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.663495  333962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-066482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-066482/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-066482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:24.810077  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:24.810117  333962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:24.810141  333962 ubuntu.go:190] setting up certificates
	I1102 13:37:24.810156  333962 provision.go:84] configureAuth start
	I1102 13:37:24.810212  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:24.827792  333962 provision.go:143] copyHostCerts
	I1102 13:37:24.827858  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:24.827875  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:24.827953  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:24.828150  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:24.828164  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:24.828215  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:24.828305  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:24.828317  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:24.828355  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:24.828426  333962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.newest-cni-066482 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-066482]
	I1102 13:37:24.927237  333962 provision.go:177] copyRemoteCerts
	I1102 13:37:24.927289  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.927321  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.944584  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.045425  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:25.062863  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:25.080629  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:37:25.097296  333962 provision.go:87] duration metric: took 287.125327ms to configureAuth
	I1102 13:37:25.097332  333962 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:25.097535  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:25.097668  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.115731  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:25.115937  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:25.115955  333962 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:25.401017  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:25.401045  333962 machine.go:97] duration metric: took 4.091415666s to provisionDockerMachine
	I1102 13:37:25.401058  333962 start.go:293] postStartSetup for "newest-cni-066482" (driver="docker")
	I1102 13:37:25.401071  333962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:25.401154  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:25.401203  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.420252  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.519659  333962 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:25.522994  333962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:25.523015  333962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:25.523025  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:25.523068  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:25.523146  333962 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:25.523246  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.712619  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:24.712652  333276 machine.go:97] duration metric: took 4.396840284s to provisionDockerMachine
	I1102 13:37:24.712667  333276 start.go:293] postStartSetup for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:37:24.712682  333276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:24.712766  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:24.712819  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.733777  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.836037  333276 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:24.839702  333276 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:24.839733  333276 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:24.839744  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:24.839789  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:24.839894  333276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:24.840014  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.847534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:24.864718  333276 start.go:296] duration metric: took 152.035287ms for postStartSetup
	I1102 13:37:24.864791  333276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:24.864826  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.884885  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.983028  333276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:24.987641  333276 fix.go:56] duration metric: took 5.212515962s for fixHost
	I1102 13:37:24.987669  333276 start.go:83] releasing machines lock for "default-k8s-diff-port-538419", held for 5.212566618s
	I1102 13:37:24.987736  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:25.007034  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.007083  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.007090  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.007125  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.007153  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.007176  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.007213  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.007274  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.007319  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:25.024428  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:25.135885  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.153535  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.171518  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.177840  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.186217  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190875  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190931  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.225348  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.233857  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.242147  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245844  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245889  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.282977  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:25.290988  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.299515  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303360  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303415  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.338843  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.348256  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:25.352326  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
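	The block above is the standard OpenSSL hashed-directory scheme: openssl x509 -hash -noout prints the subject-name hash of each CA (51391683 for 12914.pem, 3ec20f2e for 129142.pem, b5213941 for minikubeCA.pem), and a <hash>.0 symlink under /etc/ssl/certs lets TLS clients locate the CA by hash; update-ca-certificates and update-ca-trust are then invoked only if present. A minimal sketch of the same idea, using paths from the log:

	    # Compute the hash OpenSSL uses to index CA certificates...
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # ...and publish the cert as /etc/ssl/certs/<hash>.0 for TLS lookups.
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"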
	I1102 13:37:25.357122  333276 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:25.357227  333276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:25.361283  333276 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:25.422770  333276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:25.458920  333276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:25.463750  333276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:25.463815  333276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:25.471852  333276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:25.471874  333276 start.go:496] detecting cgroup driver to use...
	I1102 13:37:25.471904  333276 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:25.471948  333276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:25.485878  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:25.497990  333276 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:25.498045  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:25.512402  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:25.525187  333276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:25.608539  333276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:25.688830  333276 docker.go:234] disabling docker service ...
	I1102 13:37:25.688921  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:25.705783  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:25.723506  333276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:25.813168  333276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:25.898289  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:25.910519  333276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:25.924524  333276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:25.924604  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.933372  333276 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:25.933426  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.942218  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.951107  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.959830  333276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:25.967946  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.977032  333276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.986463  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
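	Net effect of the sed edits above, sketched as the assumed end state of /etc/crio/crio.conf.d/02-crio.conf (only the touched keys are shown; the rest of the file is left alone):

	    # Pause image pinned, cgroup management delegated to systemd,
	    # conmon placed in the pod cgroup, unprivileged low ports enabled:
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]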
	I1102 13:37:25.995429  333276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.003006  333276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.010445  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.094219  333276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.215173  333276 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.215239  333276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.219123  333276 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.219176  333276 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.222728  333276 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.250907  333276 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.250993  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.285974  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.314527  333276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:25.531179  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.548059  333962 start.go:296] duration metric: took 146.985428ms for postStartSetup
	I1102 13:37:25.548168  333962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:25.548227  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.572631  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.670554  333962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:25.674984  333962 fix.go:56] duration metric: took 4.751471621s for fixHost
	I1102 13:37:25.675009  333962 start.go:83] releasing machines lock for "newest-cni-066482", held for 4.751529653s
	I1102 13:37:25.675073  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:25.693462  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.693510  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.693517  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.693544  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.693612  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.693646  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.693704  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.693780  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.693820  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.715629  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.832398  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.854465  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.871731  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.877714  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.886048  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889747  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889800  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.924157  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.932269  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.940725  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944474  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944520  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.982544  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.991404  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.999821  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003838  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003886  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.045614  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:26.054860  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:26.058745  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:37:26.062392  333962 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:26.062503  333962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:26.066112  333962 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:26.127272  333962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:26.165639  333962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:26.170693  333962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:26.170747  333962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:26.179292  333962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:26.179317  333962 start.go:496] detecting cgroup driver to use...
	I1102 13:37:26.179346  333962 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:26.179401  333962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:26.194965  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:26.209348  333962 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:26.209406  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:26.224797  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:26.237179  333962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:26.329871  333962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:26.424322  333962 docker.go:234] disabling docker service ...
	I1102 13:37:26.424387  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:26.439911  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:26.453248  333962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:26.542141  333962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:26.630964  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:26.643532  333962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:26.658482  333962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:26.658590  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.668170  333962 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:26.668240  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.678403  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.687532  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.697557  333962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:26.707346  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.718538  333962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.729625  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.743583  333962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.753321  333962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.761369  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.839464  333962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.938004  333962 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.938073  333962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.942145  333962 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.942204  333962 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.946060  333962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.972282  333962 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.972365  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.002057  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.032337  333962 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:27.033686  333962 cli_runner.go:164] Run: docker network inspect newest-cni-066482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:27.051527  333962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:27.055606  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:27.067494  333962 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1102 13:37:22.249113  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:24.748949  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:26.749600  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:26.315635  333276 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:26.333971  333276 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:26.337905  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
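	The bash one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current gateway mapping, and install the result with sudo cp (the unprivileged shell owns the temp file, so a plain redirect into /etc/hosts would fail on permissions). Unrolled, with the IP from this run:

	    # Rebuild /etc/hosts without any old host.minikube.internal line...
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      # ...then append the fresh, tab-separated mapping.
	      printf '192.168.85.1\thost.minikube.internal\n'
	    } > "/tmp/h.$$"
	    # Copy back under sudo; cp keeps the ownership and mode of /etc/hosts.
	    sudo cp "/tmp/h.$$" /etc/hosts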
	I1102 13:37:26.348667  333276 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:26.348772  333276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:26.348822  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.387710  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.387730  333276 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:26.387777  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.413505  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.413528  333276 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:26.413538  333276 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1102 13:37:26.413643  333276 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
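	One systemd detail in the unit text above: the bare ExecStart= line is the drop-in idiom that clears the ExecStart inherited from the base kubelet.service before the second ExecStart= sets the real command (required before a non-oneshot service may override ExecStart). This fragment is what gets written later in this stanza as the 378-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The pattern in miniature:

	    # Drop-in shape: the first line empties the inherited ExecStart,
	    # the second replaces it with the command actually wanted.
	    #   [Service]
	    #   ExecStart=
	    #   ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet ...
	    sudo systemctl daemon-reload   # drop-ins take effect only after a reload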
	I1102 13:37:26.413707  333276 ssh_runner.go:195] Run: crio config
	I1102 13:37:26.464812  333276 cni.go:84] Creating CNI manager for ""
	I1102 13:37:26.464835  333276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:26.464845  333276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:37:26.464866  333276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538419 NodeName:default-k8s-diff-port-538419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:26.464984  333276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:37:26.465035  333276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:26.474038  333276 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:26.474098  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:26.483977  333276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 13:37:26.499882  333276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:26.512917  333276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1102 13:37:26.525720  333276 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:26.529537  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.539879  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.630475  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:26.654165  333276 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419 for IP: 192.168.85.2
	I1102 13:37:26.654186  333276 certs.go:195] generating shared ca certs ...
	I1102 13:37:26.654206  333276 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:26.654367  333276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:26.654420  333276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:26.654431  333276 certs.go:257] generating profile certs ...
	I1102 13:37:26.654503  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key
	I1102 13:37:26.654557  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d
	I1102 13:37:26.654639  333276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key
	I1102 13:37:26.654737  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:26.654764  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:26.654773  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:26.654795  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:26.654816  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:26.654836  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:26.654873  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:26.655534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:26.675380  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:26.694442  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:26.715145  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:26.740328  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 13:37:26.762384  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:37:26.779554  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:26.801750  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:37:26.818827  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:26.836709  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:26.855014  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:26.874155  333276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:26.887334  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:26.893721  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:26.902112  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905794  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905842  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.942658  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:26.950976  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:26.959359  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963079  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963124  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.004948  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.013797  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.023152  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027166  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027232  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.065532  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:27.074165  333276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.078238  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.117094  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:27.159482  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:27.208066  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:27.263395  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:27.326908  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
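	Each -checkend 86400 probe above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit 0 means yes, exit 1 means it expires inside the window and would be regenerated. For example:

	    # The exit status is the whole answer; -noout suppresses the cert dump.
	    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	        echo "cert good for at least another 24h"
	    else
	        echo "cert expires within 24h"
	    fi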
	I1102 13:37:27.369723  333276 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:27.369813  333276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:27.369901  333276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:27.406986  333276 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:37:27.407007  333276 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:37:27.407013  333276 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:37:27.407018  333276 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:37:27.407022  333276 cri.go:89] found id: ""
	I1102 13:37:27.407085  333276 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:27.422941  333276 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:27Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:27.423012  333276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:27.432001  333276 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:27.432029  333276 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:27.432125  333276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:27.441699  333276 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:27.442817  333276 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-538419" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.443582  333276 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-538419" cluster setting kubeconfig missing "default-k8s-diff-port-538419" context setting]
	I1102 13:37:27.444782  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.446868  333276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:27.456310  333276 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1102 13:37:27.456342  333276 kubeadm.go:602] duration metric: took 24.307485ms to restartPrimaryControlPlane
	I1102 13:37:27.456351  333276 kubeadm.go:403] duration metric: took 86.638872ms to StartCluster
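	The quick restart above rests on a config diff: the freshly rendered kubeadm config was written to kubeadm.yaml.new earlier, and when it matches the kubeadm.yaml already on the node, minikube skips reconfiguration entirely ("does not require reconfiguration"). Roughly:

	    # diff exits 0 when the rendered config matches the live one,
	    # in which case the existing control plane is restarted as-is.
	    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	        echo "config unchanged; restart without re-running kubeadm"
	    fi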
	I1102 13:37:27.456373  333276 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.456425  333276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.458467  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.458734  333276 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:27.458787  333276 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:27.458879  333276 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458899  333276 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.458911  333276 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:27.458908  333276 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458932  333276 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-538419"
	I1102 13:37:27.458925  333276 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458942  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	W1102 13:37:27.458947  333276 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:27.458958  333276 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-538419"
	I1102 13:37:27.458977  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.459272  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459713  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:27.463479  333276 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:27.466531  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.489401  333276 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:27.489460  333276 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.490695  333276 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.490742  333276 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:27.490779  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.490905  333276 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.490993  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:27.491127  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.491342  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.492226  333276 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1102 13:37:24.634329  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:27.133336  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:27.068545  333962 kubeadm.go:884] updating cluster {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:27.068680  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:27.068745  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.101393  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.101420  333962 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:27.101479  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.128092  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.128116  333962 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:27.128126  333962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 13:37:27.128251  333962 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-066482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 13:37:27.128346  333962 ssh_runner.go:195] Run: crio config
	I1102 13:37:27.177989  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:27.178010  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:27.178023  333962 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1102 13:37:27.178058  333962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-066482 NodeName:newest-cni-066482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:27.178237  333962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-066482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:37:27.178304  333962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:27.189125  333962 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:27.189195  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:27.198724  333962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 13:37:27.212769  333962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:27.228632  333962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1102 13:37:27.246146  333962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:27.251613  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:27.264788  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.377806  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.402967  333962 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482 for IP: 192.168.76.2
	I1102 13:37:27.402990  333962 certs.go:195] generating shared ca certs ...
	I1102 13:37:27.403009  333962 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.403159  333962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:27.403219  333962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:27.403231  333962 certs.go:257] generating profile certs ...
	I1102 13:37:27.403335  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.key
	I1102 13:37:27.403407  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b
	I1102 13:37:27.403461  333962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key
	I1102 13:37:27.403744  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:27.403786  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:27.403799  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:27.403828  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:27.403859  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:27.403889  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:27.403938  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:27.404687  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:27.430704  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:27.452417  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:27.483637  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:27.517977  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 13:37:27.573265  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:37:27.598304  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:27.618317  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:37:27.639808  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:27.657181  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:27.681070  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:27.704152  333962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:27.722253  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:27.731519  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:27.743037  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748191  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748248  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.799685  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:27.809081  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:27.818029  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822628  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822681  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.881477  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.891397  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.900808  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904551  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904621  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.942963  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
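The ls / openssl x509 -hash / ln sequences above build OpenSSL's hashed trust-store layout: a CA under /etc/ssl/certs is only found by OpenSSL if a <subject-hash>.0 symlink points at it. The same step by hand, with paths taken from the log:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$pem")    # prints the subject hash, e.g. b5213941
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
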
	I1102 13:37:27.952008  333962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.956221  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.997863  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:28.047948  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:28.098660  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:28.159695  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:28.224833  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
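The six -checkend runs above are expiry probes: openssl x509 -checkend 86400 exits 0 if the certificate is still valid 24 hours from now and non-zero otherwise, which is how minikube decides whether the kubeadm-managed certs need regenerating. For one cert:

	if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "expires within 24h: would trigger cert renewal"
	fi
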
	I1102 13:37:28.294684  333962 kubeadm.go:401] StartCluster: {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:28.294796  333962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:28.294862  333962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:28.338693  333962 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:28.338718  333962 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:28.338726  333962 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:28.338732  333962 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:28.338766  333962 cri.go:89] found id: ""
	I1102 13:37:28.338853  333962 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:28.354945  333962 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:28Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:28.355009  333962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:28.369068  333962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:28.369089  333962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:28.369134  333962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:28.379230  333962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:28.380715  333962 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-066482" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.381840  333962 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-066482" cluster setting kubeconfig missing "newest-cni-066482" context setting]
	I1102 13:37:28.383187  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.385699  333962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:28.395624  333962 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1102 13:37:28.395794  333962 kubeadm.go:602] duration metric: took 26.694184ms to restartPrimaryControlPlane
	I1102 13:37:28.395818  333962 kubeadm.go:403] duration metric: took 101.142697ms to StartCluster
	I1102 13:37:28.395872  333962 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.396257  333962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.398943  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.399509  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:28.399593  333962 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:28.399697  333962 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-066482"
	I1102 13:37:28.399715  333962 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-066482"
	W1102 13:37:28.399723  333962 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:28.399747  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400242  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400322  333962 addons.go:70] Setting dashboard=true in profile "newest-cni-066482"
	I1102 13:37:28.400358  333962 addons.go:239] Setting addon dashboard=true in "newest-cni-066482"
	W1102 13:37:28.400367  333962 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:28.400398  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400424  333962 addons.go:70] Setting default-storageclass=true in profile "newest-cni-066482"
	I1102 13:37:28.400440  333962 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-066482"
	I1102 13:37:28.400747  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400930  333962 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:28.401517  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.404755  333962 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:28.405862  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:28.441415  333962 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 13:37:28.441452  333962 addons.go:239] Setting addon default-storageclass=true in "newest-cni-066482"
	W1102 13:37:28.441469  333962 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:28.441497  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.441992  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.443413  333962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:28.443587  333962 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.493290  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:27.493307  333276 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:27.493359  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.524914  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.531668  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.532019  333276 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.532031  333276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:27.532222  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.567797  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.652323  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.668241  333276 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:27.674864  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:27.674945  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:27.680089  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.693623  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:27.693664  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:27.697013  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.711998  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:27.712105  333276 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:27.730732  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:27.730759  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:27.750616  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:27.750640  333276 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:27.770302  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:27.770348  333276 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:27.786951  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:27.786978  333276 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:27.803298  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:27.803327  333276 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:27.818949  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:27.818969  333276 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:27.832390  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:29.492024  333276 node_ready.go:49] node "default-k8s-diff-port-538419" is "Ready"
	I1102 13:37:29.492059  333276 node_ready.go:38] duration metric: took 1.82377358s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:29.492086  333276 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:29.492140  333276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:30.138979  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458843131s)
	I1102 13:37:30.139203  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.306780942s)
	I1102 13:37:30.139232  333276 api_server.go:72] duration metric: took 2.680469941s to wait for apiserver process to appear ...
	I1102 13:37:30.139245  333276 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:30.139262  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.139337  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.442032819s)
	I1102 13:37:30.140830  333276 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-538419 addons enable metrics-server
	
	I1102 13:37:30.144441  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.144472  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
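The 500s above are the normal restart transient: /healthz?verbose reports every post-start hook, and the two [-] entries (RBAC bootstrap roles, system priority classes) flip to [+] once the apiserver finishes seeding its defaults, after which the endpoint returns a plain 200 ok, as it does further down. To watch it by hand; a sketch, assuming anonymous access to /healthz is allowed on this cluster (otherwise present the client certs from /var/lib/minikube/certs):

	curl -ks 'https://192.168.85.2:8444/healthz?verbose' | tail -n 5
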
	I1102 13:37:30.146788  333276 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1102 13:37:28.444400  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:28.444417  333962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:28.444498  333962 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.444527  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:28.444586  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.444500  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.481261  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.483777  333962 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.483797  333962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:28.483850  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.485369  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.519190  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.625401  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:28.638037  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.653422  333962 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:28.653533  333962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:28.682341  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.694090  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:28.694153  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:28.716329  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:28.716362  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:28.737776  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:28.737802  333962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:28.755596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:28.755618  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:28.780596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:28.780618  333962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:28.797326  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:28.797355  333962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:28.814533  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:28.814561  333962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:28.832611  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:28.832643  333962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:28.856649  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:28.856713  333962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:28.874888  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:31.209184  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.571053535s)
	I1102 13:37:31.209241  333962 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.555675413s)
	I1102 13:37:31.209282  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.526844296s)
	I1102 13:37:31.209372  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.334451096s)
	I1102 13:37:31.209287  333962 api_server.go:72] duration metric: took 2.808316845s to wait for apiserver process to appear ...
	I1102 13:37:31.209432  333962 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:31.209539  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.211060  333962 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-066482 addons enable metrics-server
	
	I1102 13:37:31.216831  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.216854  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.222003  333962 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1102 13:37:28.750465  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:30.751057  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:31.223225  333962 addons.go:515] duration metric: took 2.823637855s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:31.709830  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.714383  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.714411  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:32.209645  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:32.214358  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
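Once healthz returns 200, the "control plane version" line below is read from the /version endpoint; the equivalent client-side check, using the context name written to the kubeconfig above:

	kubectl --context newest-cni-066482 version   # client and server should both report v1.34.1
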
	I1102 13:37:32.215702  333962 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:32.215723  333962 api_server.go:131] duration metric: took 1.006197716s to wait for apiserver health ...
	I1102 13:37:32.215740  333962 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:32.219326  333962 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:32.219361  333962 system_pods.go:61] "coredns-66bc5c9577-9knvp" [fc8ccf3a-6c3a-4df9-b174-358eea8022b8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219370  333962 system_pods.go:61] "etcd-newest-cni-066482" [b4f125a2-c9c3-4192-bf23-c4ad050bb815] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:32.219379  333962 system_pods.go:61] "kindnet-schdw" [74998f6e-2a7a-40d8-a5c2-a1142f69ee93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 13:37:32.219392  333962 system_pods.go:61] "kube-apiserver-newest-cni-066482" [e270489b-3057-480f-96dd-329cbcc6f0e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:32.219397  333962 system_pods.go:61] "kube-controller-manager-newest-cni-066482" [9b62b1ef-e72e-41f9-9e3d-c57bfaf0b578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:32.219403  333962 system_pods.go:61] "kube-proxy-fkp22" [85a24a6f-4f8c-4671-92f6-fbe43ab7bb10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 13:37:32.219408  333962 system_pods.go:61] "kube-scheduler-newest-cni-066482" [5f88460d-ea42-4891-a458-b86eb57b551e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:32.219417  333962 system_pods.go:61] "storage-provisioner" [3bbb95ec-ecf8-4335-b3df-82a08d03b66b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
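The two Pending pods above are held back by the node.kubernetes.io/not-ready taint, which the node controller clears once the kubelet reports Ready; the newest-cni profile deliberately does not wait for that (node_ready:false in the VerifyComponents map). To inspect the taint directly:

	kubectl --context newest-cni-066482 get node newest-cni-066482 -o jsonpath='{.spec.taints}'
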
	I1102 13:37:32.219424  333962 system_pods.go:74] duration metric: took 3.677705ms to wait for pod list to return data ...
	I1102 13:37:32.219434  333962 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:32.221997  333962 default_sa.go:45] found service account: "default"
	I1102 13:37:32.222015  333962 default_sa.go:55] duration metric: took 2.576388ms for default service account to be created ...
	I1102 13:37:32.222026  333962 kubeadm.go:587] duration metric: took 3.821064355s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:32.222059  333962 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:32.224451  333962 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:32.224479  333962 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:32.224495  333962 node_conditions.go:105] duration metric: took 2.431117ms to run NodePressure ...
	I1102 13:37:32.224508  333962 start.go:242] waiting for startup goroutines ...
	I1102 13:37:32.224519  333962 start.go:247] waiting for cluster config update ...
	I1102 13:37:32.224531  333962 start.go:256] writing updated cluster config ...
	I1102 13:37:32.224891  333962 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:32.277880  333962 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:32.280437  333962 out.go:179] * Done! kubectl is now configured to use "newest-cni-066482" cluster and "default" namespace by default
	W1102 13:37:29.133694  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:31.633878  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:32.248764  321355 pod_ready.go:94] pod "coredns-66bc5c9577-2dtpc" is "Ready"
	I1102 13:37:32.248791  321355 pod_ready.go:86] duration metric: took 36.005777547s for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.251505  321355 pod_ready.go:83] waiting for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.256003  321355 pod_ready.go:94] pod "etcd-no-preload-978795" is "Ready"
	I1102 13:37:32.256030  321355 pod_ready.go:86] duration metric: took 4.500033ms for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.258154  321355 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.262361  321355 pod_ready.go:94] pod "kube-apiserver-no-preload-978795" is "Ready"
	I1102 13:37:32.262386  321355 pod_ready.go:86] duration metric: took 4.208933ms for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.264670  321355 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.446929  321355 pod_ready.go:94] pod "kube-controller-manager-no-preload-978795" is "Ready"
	I1102 13:37:32.446958  321355 pod_ready.go:86] duration metric: took 182.263594ms for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.647228  321355 pod_ready.go:83] waiting for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.046223  321355 pod_ready.go:94] pod "kube-proxy-rmkmd" is "Ready"
	I1102 13:37:33.046245  321355 pod_ready.go:86] duration metric: took 398.98563ms for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.247357  321355 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646686  321355 pod_ready.go:94] pod "kube-scheduler-no-preload-978795" is "Ready"
	I1102 13:37:33.646712  321355 pod_ready.go:86] duration metric: took 399.328602ms for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646724  321355 pod_ready.go:40] duration metric: took 37.476249238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:33.693279  321355 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:33.695127  321355 out.go:179] * Done! kubectl is now configured to use "no-preload-978795" cluster and "default" namespace by default
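The pod_ready loop that just completed for no-preload-978795 is roughly a kubectl wait over each of the listed control-plane labels; an approximate equivalent, with the 4m budget quoted elsewhere in the log:

	for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
	         component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context no-preload-978795 -n kube-system wait pod -l "$l" \
	    --for=condition=Ready --timeout=4m
	done
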
	I1102 13:37:30.148737  333276 addons.go:515] duration metric: took 2.689945409s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:30.639704  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.646596  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.646625  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.140024  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:31.144505  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1102 13:37:31.145652  333276 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:31.145677  333276 api_server.go:131] duration metric: took 1.006426268s to wait for apiserver health ...
	I1102 13:37:31.145686  333276 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:31.148654  333276 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:31.148693  333276 system_pods.go:61] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.148706  333276 system_pods.go:61] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.148715  333276 system_pods.go:61] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.148725  333276 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.148735  333276 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.148740  333276 system_pods.go:61] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.148749  333276 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.148752  333276 system_pods.go:61] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.148758  333276 system_pods.go:74] duration metric: took 3.0672ms to wait for pod list to return data ...
	I1102 13:37:31.148767  333276 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:31.151024  333276 default_sa.go:45] found service account: "default"
	I1102 13:37:31.151047  333276 default_sa.go:55] duration metric: took 2.27431ms for default service account to be created ...
	I1102 13:37:31.151056  333276 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:37:31.153886  333276 system_pods.go:86] 8 kube-system pods found
	I1102 13:37:31.153909  333276 system_pods.go:89] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.153917  333276 system_pods.go:89] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.153923  333276 system_pods.go:89] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.153933  333276 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.153941  333276 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.153948  333276 system_pods.go:89] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.153953  333276 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.153958  333276 system_pods.go:89] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.153965  333276 system_pods.go:126] duration metric: took 2.903516ms to wait for k8s-apps to be running ...
	I1102 13:37:31.153973  333276 system_svc.go:44] waiting for kubelet service to be running ...
	I1102 13:37:31.154011  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:31.167191  333276 system_svc.go:56] duration metric: took 13.212049ms WaitForService to wait for kubelet
	I1102 13:37:31.167214  333276 kubeadm.go:587] duration metric: took 3.70845301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:37:31.167229  333276 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:31.170065  333276 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:31.170091  333276 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:31.170118  333276 node_conditions.go:105] duration metric: took 2.883566ms to run NodePressure ...
	I1102 13:37:31.170133  333276 start.go:242] waiting for startup goroutines ...
	I1102 13:37:31.170146  333276 start.go:247] waiting for cluster config update ...
	I1102 13:37:31.170163  333276 start.go:256] writing updated cluster config ...
	I1102 13:37:31.170468  333276 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:31.174099  333276 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:31.178339  333276 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 13:37:33.184101  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:34.134125  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:36.633840  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:35.685411  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:38.184423  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:39.134511  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:41.633152  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:40.683713  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:43.183801  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:43.634797  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:46.133702  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:45.684695  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:48.183904  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:48.633463  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:49.633961  328990 pod_ready.go:94] pod "coredns-66bc5c9577-vpq66" is "Ready"
	I1102 13:37:49.633983  328990 pod_ready.go:86] duration metric: took 36.006114822s for pod "coredns-66bc5c9577-vpq66" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.636373  328990 pod_ready.go:83] waiting for pod "etcd-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.640305  328990 pod_ready.go:94] pod "etcd-embed-certs-748183" is "Ready"
	I1102 13:37:49.640326  328990 pod_ready.go:86] duration metric: took 3.933112ms for pod "etcd-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.642169  328990 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.645917  328990 pod_ready.go:94] pod "kube-apiserver-embed-certs-748183" is "Ready"
	I1102 13:37:49.645933  328990 pod_ready.go:86] duration metric: took 3.743148ms for pod "kube-apiserver-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.647713  328990 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.832391  328990 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748183" is "Ready"
	I1102 13:37:49.832415  328990 pod_ready.go:86] duration metric: took 184.682932ms for pod "kube-controller-manager-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.032477  328990 pod_ready.go:83] waiting for pod "kube-proxy-pg8nt" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.432219  328990 pod_ready.go:94] pod "kube-proxy-pg8nt" is "Ready"
	I1102 13:37:50.432252  328990 pod_ready.go:86] duration metric: took 399.749991ms for pod "kube-proxy-pg8nt" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.632021  328990 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:51.032263  328990 pod_ready.go:94] pod "kube-scheduler-embed-certs-748183" is "Ready"
	I1102 13:37:51.032285  328990 pod_ready.go:86] duration metric: took 400.23928ms for pod "kube-scheduler-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:51.032297  328990 pod_ready.go:40] duration metric: took 37.407986415s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:51.078471  328990 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:51.080252  328990 out.go:179] * Done! kubectl is now configured to use "embed-certs-748183" cluster and "default" namespace by default
	W1102 13:37:50.684482  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:52.684813  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:55.183972  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:57.684208  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:38:00.183283  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:38:02.184008  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:38:04.683177  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	I1102 13:38:06.683209  333276 pod_ready.go:94] pod "coredns-66bc5c9577-4xsxx" is "Ready"
	I1102 13:38:06.683235  333276 pod_ready.go:86] duration metric: took 35.504872374s for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.686499  333276 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.690658  333276 pod_ready.go:94] pod "etcd-default-k8s-diff-port-538419" is "Ready"
	I1102 13:38:06.690683  333276 pod_ready.go:86] duration metric: took 4.162031ms for pod "etcd-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.692830  333276 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.696597  333276 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-538419" is "Ready"
	I1102 13:38:06.696620  333276 pod_ready.go:86] duration metric: took 3.762714ms for pod "kube-apiserver-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.698448  333276 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.881706  333276 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-538419" is "Ready"
	I1102 13:38:06.881742  333276 pod_ready.go:86] duration metric: took 183.271121ms for pod "kube-controller-manager-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:07.082248  333276 pod_ready.go:83] waiting for pod "kube-proxy-nnhqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:07.481632  333276 pod_ready.go:94] pod "kube-proxy-nnhqs" is "Ready"
	I1102 13:38:07.481661  333276 pod_ready.go:86] duration metric: took 399.382528ms for pod "kube-proxy-nnhqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:07.682180  333276 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:08.081746  333276 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-538419" is "Ready"
	I1102 13:38:08.081771  333276 pod_ready.go:86] duration metric: took 399.564273ms for pod "kube-scheduler-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:08.081786  333276 pod_ready.go:40] duration metric: took 36.907651629s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:38:08.128554  333276 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:38:08.130999  333276 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-538419" cluster and "default" namespace by default
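
The out-of-order timestamps above are two test profiles logging into the same stream: PID 328990 is the embed-certs-748183 run and PID 333276 is default-k8s-diff-port-538419, so their pod_ready polls interleave. The wait itself is a plain poll: list the kube-system pods matching each component label until the PodReady condition turns True or the 4m0s budget expires. Below is a minimal client-go sketch of that pattern — an illustration, not minikube's actual pod_ready.go; the kubeconfig path, poll interval, and error handling are assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's PodReady condition is True.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: a standard kubeconfig at ~/.kube/config pointing at the cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // The same component labels the log enumerates; 4m0s mirrors the "extra waiting" budget.
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
        }
        deadline := time.Now().Add(4 * time.Minute)

        for _, sel := range selectors {
            for {
                pods, err := client.CoreV1().Pods("kube-system").List(
                    context.TODO(), metav1.ListOptions{LabelSelector: sel})
                if err == nil && len(pods.Items) > 0 && isReady(&pods.Items[0]) {
                    fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
                    break
                }
                if time.Now().After(deadline) {
                    fmt.Printf("timed out waiting for selector %q\n", sel)
                    break
                }
                time.Sleep(2 * time.Second) // illustrative poll interval; the log shows ~2.5s
            }
        }
    }

At the ~2.5 s cadence visible in the "not Ready" warnings above, the 35.5 s coredns-66bc5c9577-4xsxx wait works out to roughly 14 polls before the pod finally reports Ready.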
	
	
	==> CRI-O <==
	Nov 02 13:37:40 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:37:40.589229473Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 13:37:40 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:37:40.824762566Z" level=info msg="Removing container: 902076b4f117d5111fb4cb9e9e5feb66c35ebb663fded9cdf64fb74ecfa0a4a6" id=210e6115-4dd6-4c45-9410-2334d2ff067c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:40 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:37:40.834181029Z" level=info msg="Removed container 902076b4f117d5111fb4cb9e9e5feb66c35ebb663fded9cdf64fb74ecfa0a4a6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k/dashboard-metrics-scraper" id=210e6115-4dd6-4c45-9410-2334d2ff067c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.874972471Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25e86424-b701-4acc-a6ea-492936b082bf name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.875971928Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5fa22bf2-7a05-42cb-87c4-6807198ec69a name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.877037252Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=400be694-030c-4dce-9aef-5e14529ac869 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.877180936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.88288817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.88302266Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/50bd5454a23bc96cf7d6acf7e16a4b51c988f578967ca4ff6fe0c9ceadafac99/merged/etc/passwd: no such file or directory"
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.883043132Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/50bd5454a23bc96cf7d6acf7e16a4b51c988f578967ca4ff6fe0c9ceadafac99/merged/etc/group: no such file or directory"
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.883249721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.911110496Z" level=info msg="Created container b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0: kube-system/storage-provisioner/storage-provisioner" id=400be694-030c-4dce-9aef-5e14529ac869 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.911721096Z" level=info msg="Starting container: b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0" id=8240f358-868c-4549-bfbf-f05257eb4ae3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.913608152Z" level=info msg="Started container" PID=1784 containerID=b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0 description=kube-system/storage-provisioner/storage-provisioner id=8240f358-868c-4549-bfbf-f05257eb4ae3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b56c51c10c9eaed681ff6242fa8d278869d1009d24a551d25dac01cbd38df896
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.756238558Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ed552145-3791-40a9-bc5a-1fbfaf874af9 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.757138337Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c3fd92e6-ecab-4384-ba22-d3010fe94f35 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.758085004Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k/dashboard-metrics-scraper" id=fa0346dc-9c59-4999-b308-96612f770f05 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.758206888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.763709034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.764481647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.804393093Z" level=info msg="Created container 2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k/dashboard-metrics-scraper" id=fa0346dc-9c59-4999-b308-96612f770f05 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.805085769Z" level=info msg="Starting container: 2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2" id=a9307887-512c-4966-aad8-9ad8b9380816 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.80716673Z" level=info msg="Started container" PID=1800 containerID=2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k/dashboard-metrics-scraper id=a9307887-512c-4966-aad8-9ad8b9380816 name=/runtime.v1.RuntimeService/StartContainer sandboxID=04a715c66e00e0a6dbab16a090dfd35972bab8d6acb440739d246e62bbfd837d
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.885837048Z" level=info msg="Removing container: 667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561" id=8d2f61dd-8b82-48c5-b7d9-8a9f712b3e38 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.897773788Z" level=info msg="Removed container 667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k/dashboard-metrics-scraper" id=8d2f61dd-8b82-48c5-b7d9-8a9f712b3e38 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	2060b3aa9bf65       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   04a715c66e00e       dashboard-metrics-scraper-6ffb444bf9-98t5k             kubernetes-dashboard
	b5f1c0f89cbd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   b56c51c10c9ea       storage-provisioner                                    kube-system
	3b4d565f2df6b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   1cd27e51b91d3       kubernetes-dashboard-855c9754f9-zcdhn                  kubernetes-dashboard
	c9b5ad92438bb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   5a77fb4aae810       coredns-66bc5c9577-4xsxx                               kube-system
	77e108f874417       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   4578bbab212f8       busybox                                                default
	a1deaef6b0856       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   b56c51c10c9ea       storage-provisioner                                    kube-system
	5893cf1512ee0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   d91ae740f3506       kube-proxy-nnhqs                                       kube-system
	9fe26d5a73cb2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   3b89bde86bdbc       kindnet-gc6n2                                          kube-system
	9c0a5c5252f4d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   2f6f9e440309d       etcd-default-k8s-diff-port-538419                      kube-system
	4b0ca32f1b94d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   1a42344b48320       kube-scheduler-default-k8s-diff-port-538419            kube-system
	59c16f4262360       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   11ec43a7f2236       kube-controller-manager-default-k8s-diff-port-538419   kube-system
	9d75eaf3dc03d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   fd4560b819579       kube-apiserver-default-k8s-diff-port-538419            kube-system
	
	
	==> coredns [c9b5ad92438bb88eb2038be88d7936f90369f0d2d1fbc95af1cb6ec286ad7cee] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52831 - 62528 "HINFO IN 4975869981560521564.5184462275221150874. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067480545s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
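
The dial tcp 10.96.0.1:443 i/o timeouts above are CoreDNS's kubernetes plugin failing to list Services and EndpointSlices through the cluster IP while the control plane was still coming back up; they line up with etcd's "rejected connection" warnings at 13:37:28 (see the etcd section below) and stop once the apiserver is healthy, which is presumably why coredns-66bc5c9577-4xsxx only reports Ready at 13:38:06.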
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-538419
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-538419
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=default-k8s-diff-port-538419
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_36_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:36:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-538419
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:38:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:38:00 +0000   Sun, 02 Nov 2025 13:36:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:38:00 +0000   Sun, 02 Nov 2025 13:36:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:38:00 +0000   Sun, 02 Nov 2025 13:36:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:38:00 +0000   Sun, 02 Nov 2025 13:36:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-538419
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a8e8c9a3-24d1-4403-8143-5254b74d1185
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-4xsxx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-538419                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-gc6n2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-538419             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-538419    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-nnhqs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-538419             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-98t5k              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zcdhn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 109s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                 node-controller  Node default-k8s-diff-port-538419 event: Registered Node default-k8s-diff-port-538419 in Controller
	  Normal  NodeReady                99s                  kubelet          Node default-k8s-diff-port-538419 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node default-k8s-diff-port-538419 event: Registered Node default-k8s-diff-port-538419 in Controller
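
The Allocated resources figures are the sums of the pod rows above: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 8000m capacity is about 10.6%, which kubectl truncates to 10%. Memory requests are 70Mi + 100Mi + 50Mi = 220Mi, well under 1% of the 32863360Ki node, and the lone 100m CPU limit is kindnet's.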
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc] <==
	{"level":"warn","ts":"2025-11-02T13:37:28.713852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49164","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:49164: read: connection reset by peer"}
	{"level":"warn","ts":"2025-11-02T13:37:28.725303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.734810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.747052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.757952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.767704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.778291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.785963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.799589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.808196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.816700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.825068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.832820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.842053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.864614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.872605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.884113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.888179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.897443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.904797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.925332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.933169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.941393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.986475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49548","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-02T13:37:41.947832Z","caller":"traceutil/trace.go:172","msg":"trace[1515664450] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"114.480878ms","start":"2025-11-02T13:37:41.833316Z","end":"2025-11-02T13:37:41.947797Z","steps":["trace[1515664450] 'process raft request'  (duration: 112.102393ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:38:23 up  1:20,  0 user,  load average: 2.13, 3.55, 2.59
	Linux default-k8s-diff-port-538419 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9fe26d5a73cb2e5383872650fb2ecf2e6884d1ef50222efe25cfb4164f2b146f] <==
	I1102 13:37:30.364425       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:37:30.364701       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 13:37:30.364874       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:37:30.364895       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:37:30.364919       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:37:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:37:30.569964       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:37:30.570027       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:37:30.570037       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:37:30.570181       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:37:30.972314       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:37:30.972349       1 metrics.go:72] Registering metrics
	I1102 13:37:30.972417       1 controller.go:711] "Syncing nftables rules"
	I1102 13:37:40.570097       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:37:40.570176       1 main.go:301] handling current node
	I1102 13:37:50.572115       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:37:50.572174       1 main.go:301] handling current node
	I1102 13:38:00.569637       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:38:00.569671       1 main.go:301] handling current node
	I1102 13:38:10.569765       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:38:10.569802       1 main.go:301] handling current node
	I1102 13:38:20.577873       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:38:20.577904       1 main.go:301] handling current node
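
kindnet's "Handling node with IPs" pairs repeat on a 10-second reconcile tick (13:37:40 through 13:38:20); with a single node each pass just re-confirms the current node. The earlier "nri plugin exited" message appears harmless here: /var/run/nri/nri.sock does not exist on this minikube node, so the optional NRI integration simply backs off.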
	
	
	==> kube-apiserver [9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3] <==
	I1102 13:37:29.513992       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:37:29.516502       1 aggregator.go:171] initial CRD sync complete...
	I1102 13:37:29.516559       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 13:37:29.516603       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 13:37:29.516629       1 cache.go:39] Caches are synced for autoregister controller
	I1102 13:37:29.516885       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 13:37:29.517508       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1102 13:37:29.521867       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 13:37:29.532022       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 13:37:29.542005       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:37:29.545236       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1102 13:37:29.558433       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1102 13:37:29.558541       1 policy_source.go:240] refreshing policies
	I1102 13:37:29.613415       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:37:29.882354       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:37:29.892631       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 13:37:29.926003       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:37:29.946880       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:37:29.960932       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:37:30.000241       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.91.164"}
	I1102 13:37:30.012349       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.253.253"}
	I1102 13:37:30.413179       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:37:33.265835       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 13:37:33.317096       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 13:37:33.367607       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae] <==
	I1102 13:37:32.847152       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 13:37:32.848345       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 13:37:32.852553       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 13:37:32.854813       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 13:37:32.862315       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1102 13:37:32.862440       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 13:37:32.863462       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1102 13:37:32.863517       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 13:37:32.863533       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:37:32.863547       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:37:32.863558       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:37:32.863586       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 13:37:32.863655       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 13:37:32.864022       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 13:37:32.865951       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 13:37:32.868254       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:37:32.868358       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:37:32.869483       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 13:37:32.871830       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1102 13:37:32.873191       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 13:37:32.873291       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 13:37:32.873388       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-538419"
	I1102 13:37:32.873437       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1102 13:37:32.877296       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 13:37:32.881520       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5893cf1512ee0f6c8e74166fa347d602d16b90bbd7c1a8790852d522434c5fb6] <==
	I1102 13:37:30.178314       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:37:30.245710       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:37:30.346192       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:37:30.346254       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 13:37:30.346821       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:37:30.366049       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:37:30.366103       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:37:30.371161       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:37:30.371523       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:37:30.371538       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:30.372597       1 config.go:200] "Starting service config controller"
	I1102 13:37:30.372657       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:37:30.372672       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:37:30.372739       1 config.go:309] "Starting node config controller"
	I1102 13:37:30.372749       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:37:30.372756       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:37:30.372763       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:37:30.372777       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:37:30.372659       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:37:30.472825       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:37:30.474453       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:37:30.474457       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca] <==
	I1102 13:37:28.251412       1 serving.go:386] Generated self-signed cert in-memory
	W1102 13:37:29.480654       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 13:37:29.480690       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 13:37:29.480701       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 13:37:29.480710       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 13:37:29.520723       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:37:29.520855       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:29.525635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:29.525718       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:29.526334       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:37:29.526421       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 13:37:29.625899       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:37:33 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:33.500371     751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjl58\" (UniqueName: \"kubernetes.io/projected/a8adaa2c-97af-4c5c-8dea-76b5d7fe8f9d-kube-api-access-fjl58\") pod \"kubernetes-dashboard-855c9754f9-zcdhn\" (UID: \"a8adaa2c-97af-4c5c-8dea-76b5d7fe8f9d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zcdhn"
	Nov 02 13:37:33 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:33.500423     751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8caa2918-0909-4d44-b89f-b91d119bf2dc-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-98t5k\" (UID: \"8caa2918-0909-4d44-b89f-b91d119bf2dc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k"
	Nov 02 13:37:33 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:33.500631     751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a8adaa2c-97af-4c5c-8dea-76b5d7fe8f9d-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-zcdhn\" (UID: \"a8adaa2c-97af-4c5c-8dea-76b5d7fe8f9d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zcdhn"
	Nov 02 13:37:36 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:36.179540     751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 02 13:37:36 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:36.823070     751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zcdhn" podStartSLOduration=0.916690371 podStartE2EDuration="3.823041771s" podCreationTimestamp="2025-11-02 13:37:33 +0000 UTC" firstStartedPulling="2025-11-02 13:37:33.763077882 +0000 UTC m=+7.107039008" lastFinishedPulling="2025-11-02 13:37:36.66942926 +0000 UTC m=+10.013390408" observedRunningTime="2025-11-02 13:37:36.822996341 +0000 UTC m=+10.166957487" watchObservedRunningTime="2025-11-02 13:37:36.823041771 +0000 UTC m=+10.167002916"
	Nov 02 13:37:39 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:39.819578     751 scope.go:117] "RemoveContainer" containerID="902076b4f117d5111fb4cb9e9e5feb66c35ebb663fded9cdf64fb74ecfa0a4a6"
	Nov 02 13:37:40 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:40.823358     751 scope.go:117] "RemoveContainer" containerID="902076b4f117d5111fb4cb9e9e5feb66c35ebb663fded9cdf64fb74ecfa0a4a6"
	Nov 02 13:37:40 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:40.823502     751 scope.go:117] "RemoveContainer" containerID="667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561"
	Nov 02 13:37:40 default-k8s-diff-port-538419 kubelet[751]: E1102 13:37:40.823769     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98t5k_kubernetes-dashboard(8caa2918-0909-4d44-b89f-b91d119bf2dc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k" podUID="8caa2918-0909-4d44-b89f-b91d119bf2dc"
	Nov 02 13:37:41 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:41.827068     751 scope.go:117] "RemoveContainer" containerID="667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561"
	Nov 02 13:37:41 default-k8s-diff-port-538419 kubelet[751]: E1102 13:37:41.827287     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98t5k_kubernetes-dashboard(8caa2918-0909-4d44-b89f-b91d119bf2dc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k" podUID="8caa2918-0909-4d44-b89f-b91d119bf2dc"
	Nov 02 13:37:49 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:49.468970     751 scope.go:117] "RemoveContainer" containerID="667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561"
	Nov 02 13:37:49 default-k8s-diff-port-538419 kubelet[751]: E1102 13:37:49.469202     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98t5k_kubernetes-dashboard(8caa2918-0909-4d44-b89f-b91d119bf2dc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k" podUID="8caa2918-0909-4d44-b89f-b91d119bf2dc"
	Nov 02 13:38:00 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:00.874515     751 scope.go:117] "RemoveContainer" containerID="a1deaef6b0856c7956d3d4a765a97d00c3bbca5496687d19141dbb5eebfcbe1e"
	Nov 02 13:38:02 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:02.755753     751 scope.go:117] "RemoveContainer" containerID="667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561"
	Nov 02 13:38:02 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:02.884224     751 scope.go:117] "RemoveContainer" containerID="667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561"
	Nov 02 13:38:02 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:02.884466     751 scope.go:117] "RemoveContainer" containerID="2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2"
	Nov 02 13:38:02 default-k8s-diff-port-538419 kubelet[751]: E1102 13:38:02.884697     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98t5k_kubernetes-dashboard(8caa2918-0909-4d44-b89f-b91d119bf2dc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k" podUID="8caa2918-0909-4d44-b89f-b91d119bf2dc"
	Nov 02 13:38:09 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:09.468908     751 scope.go:117] "RemoveContainer" containerID="2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2"
	Nov 02 13:38:09 default-k8s-diff-port-538419 kubelet[751]: E1102 13:38:09.469144     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98t5k_kubernetes-dashboard(8caa2918-0909-4d44-b89f-b91d119bf2dc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k" podUID="8caa2918-0909-4d44-b89f-b91d119bf2dc"
	Nov 02 13:38:20 default-k8s-diff-port-538419 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:38:20 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:20.195469     751 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 02 13:38:20 default-k8s-diff-port-538419 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:38:20 default-k8s-diff-port-538419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 02 13:38:20 default-k8s-diff-port-538419 systemd[1]: kubelet.service: Consumed 1.663s CPU time.
	
	
	==> kubernetes-dashboard [3b4d565f2df6b7af050261a5726ef42418b7b75d9b27549b6ac006690f117bb7] <==
	2025/11/02 13:37:36 Using namespace: kubernetes-dashboard
	2025/11/02 13:37:36 Using in-cluster config to connect to apiserver
	2025/11/02 13:37:36 Using secret token for csrf signing
	2025/11/02 13:37:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 13:37:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 13:37:36 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 13:37:36 Generating JWE encryption key
	2025/11/02 13:37:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 13:37:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 13:37:37 Initializing JWE encryption key from synchronized object
	2025/11/02 13:37:37 Creating in-cluster Sidecar client
	2025/11/02 13:37:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 13:37:37 Serving insecurely on HTTP port: 9090
	2025/11/02 13:38:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 13:37:36 Starting overwatch
	
	
	==> storage-provisioner [a1deaef6b0856c7956d3d4a765a97d00c3bbca5496687d19141dbb5eebfcbe1e] <==
	I1102 13:37:30.142705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 13:38:00.145516       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0] <==
	I1102 13:38:00.925554       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:38:00.931882       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:38:00.931913       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 13:38:00.933893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:04.389170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:08.650508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:12.249267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:15.302207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:18.324610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:18.328838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:38:18.328995       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:38:18.329068       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"030faaca-fc27-4b34-be7e-e6cc7b667e6a", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-538419_69e20460-1f5a-4fce-91b8-a20e50b4f13b became leader
	I1102 13:38:18.329141       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538419_69e20460-1f5a-4fce-91b8-a20e50b4f13b!
	W1102 13:38:18.330818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:18.334937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:38:18.429365       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538419_69e20460-1f5a-4fce-91b8-a20e50b4f13b!
	W1102 13:38:20.338013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:20.341820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:22.345616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:22.350378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
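The kubelet entries in the log above show the container restart back-off for dashboard-metrics-scraper doubling from "back-off 10s" to "back-off 20s". Below is a minimal Go sketch of that capped exponential schedule, assuming the stock kubelet defaults of a 10s initial delay and a 5m ceiling; the function name and structure are illustrative, not kubelet's actual code.

	package main

	import (
		"fmt"
		"time"
	)

	// crashLoopDelay returns an illustrative back-off before restart number n
	// of a crash-looping container: 10s, 20s, 40s, ... capped at 5m
	// (assumed here to match the kubelet default).
	func crashLoopDelay(n int) time.Duration {
		const (
			initial  = 10 * time.Second
			maxDelay = 5 * time.Minute
		)
		d := initial
		for i := 0; i < n; i++ {
			d *= 2
			if d >= maxDelay {
				return maxDelay
			}
		}
		return d
	}

	func main() {
		// Matches the sequence visible in the kubelet log: 10s, then 20s, ...
		for n := 0; n < 6; n++ {
			fmt.Printf("restart %d: back-off %v\n", n, crashLoopDelay(n))
		}
	}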
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419: exit status 2 (318.045662ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
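The harness deliberately tolerates exit status 2 from "minikube status" ("may be ok" above): the command can exit non-zero while still printing a component state such as Running. A rough Go sketch of that pattern, reusing the binary path, format template, and profile from the command above; error handling is simplified relative to helpers_test.go.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "default-k8s-diff-port-538419")
		out, err := cmd.Output() // stdout is captured even when the exit code is non-zero
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 2 {
			// Exit 2 still prints a component state such as "Running"; treat as possibly benign.
			fmt.Printf("status error: exit status 2 (may be ok): %s", out)
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}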
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-538419 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-538419
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-538419:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2",
	        "Created": "2025-11-02T13:36:10.354191788Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333633,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:37:19.830426571Z",
	            "FinishedAt": "2025-11-02T13:37:18.25192093Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/hosts",
	        "LogPath": "/var/lib/docker/containers/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2/922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2-json.log",
	        "Name": "/default-k8s-diff-port-538419",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-538419:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-538419",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "922c5d262078e23f508f8c61278ded46b66d9bd0cb704eb7b2231a7207cf65d2",
	                "LowerDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555-init/diff:/var/lib/docker/overlay2/ade8b3ae4211ec2351054b079fbe16c9d5098767da7f3b4690cc326a715fa02e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8d5dae028c5e6f1bfeeb51a794171baafb7207f6ffcea4fa7a391f6472e77555/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-538419",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-538419/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-538419",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-538419",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-538419",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "46ae7c6859991c3a2bfea89e94d77e2a96bb8ed98c4ee7b5a9438d25bbb5dbdf",
	            "SandboxKey": "/var/run/docker/netns/46ae7c685999",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-538419": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:00:eb:8e:27:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8a5177e2530dcf8dba1a46a1c8708fe51c8cc64912038433c6196e6d34da5a5b",
	                    "EndpointID": "04cb658b4ab413819cdc3d19af55f110303d8a62bc38f10d57392a6edcd91621",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-538419",
	                        "922c5d262078"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
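Later in this log, provisioning resolves the container's SSH endpoint from this same inspect data with a Go template ({{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, which yields 33130 for this container). A minimal sketch of that lookup, shelling out to docker the way minikube's cli_runner does; the container name is the profile under test.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Pull a single field out of `docker inspect` instead of parsing the full JSON.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-538419").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33130 in the inspect output above
	}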
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419: exit status 2 (314.704965ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-538419 logs -n 25
E1102 13:38:25.165065   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/calico-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-538419 logs -n 25: (1.054659527s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538419 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p embed-certs-748183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable metrics-server -p newest-cni-066482 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ stop    │ -p newest-cni-066482 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:38 UTC │
	│ addons  │ enable dashboard -p newest-cni-066482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ start   │ -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ newest-cni-066482 image list --format=json                                                                                                                                                                                                    │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p newest-cni-066482 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p newest-cni-066482                                                                                                                                                                                                                          │ newest-cni-066482            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ no-preload-978795 image list --format=json                                                                                                                                                                                                    │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ pause   │ -p no-preload-978795 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │                     │
	│ delete  │ -p no-preload-978795                                                                                                                                                                                                                          │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ delete  │ -p no-preload-978795                                                                                                                                                                                                                          │ no-preload-978795            │ jenkins │ v1.37.0 │ 02 Nov 25 13:37 UTC │ 02 Nov 25 13:37 UTC │
	│ image   │ embed-certs-748183 image list --format=json                                                                                                                                                                                                   │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │ 02 Nov 25 13:38 UTC │
	│ pause   │ -p embed-certs-748183 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │                     │
	│ delete  │ -p embed-certs-748183                                                                                                                                                                                                                         │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │ 02 Nov 25 13:38 UTC │
	│ delete  │ -p embed-certs-748183                                                                                                                                                                                                                         │ embed-certs-748183           │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │ 02 Nov 25 13:38 UTC │
	│ image   │ default-k8s-diff-port-538419 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │ 02 Nov 25 13:38 UTC │
	│ pause   │ -p default-k8s-diff-port-538419 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-538419 │ jenkins │ v1.37.0 │ 02 Nov 25 13:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:37:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:37:20.524373  333962 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:37:20.524647  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524664  333962 out.go:374] Setting ErrFile to fd 2...
	I1102 13:37:20.524670  333962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:37:20.524846  333962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:37:20.525403  333962 out.go:368] Setting JSON to false
	I1102 13:37:20.526966  333962 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4793,"bootTime":1762085848,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:37:20.527085  333962 start.go:143] virtualization: kvm guest
	I1102 13:37:20.531180  333962 out.go:179] * [newest-cni-066482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:37:20.533535  333962 notify.go:221] Checking for updates...
	I1102 13:37:20.533705  333962 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:37:20.535165  333962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:37:20.536733  333962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:20.538369  333962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:37:20.539773  333962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:37:20.541014  333962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:37:20.543949  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:20.544901  333962 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:37:20.580929  333962 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:37:20.581269  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.677940  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.664880977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.678092  333962 docker.go:319] overlay module found
	I1102 13:37:20.686090  333962 out.go:179] * Using the docker driver based on existing profile
	I1102 13:37:20.689767  333962 start.go:309] selected driver: docker
	I1102 13:37:20.689788  333962 start.go:930] validating driver "docker" against &{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.689907  333962 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:37:20.690830  333962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:37:20.765132  333962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-02 13:37:20.75342287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:37:20.765679  333962 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:20.765731  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:20.765799  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:20.765881  333962 start.go:353] cluster config:
	{Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:20.825212  333962 out.go:179] * Starting "newest-cni-066482" primary control-plane node in "newest-cni-066482" cluster
	I1102 13:37:20.829240  333962 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:37:20.869092  333962 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:37:20.895924  333962 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:37:20.895925  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:20.896230  333962 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1102 13:37:20.896249  333962 cache.go:59] Caching tarball of preloaded images
	I1102 13:37:20.896370  333962 preload.go:233] Found /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1102 13:37:20.896389  333962 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:37:20.896531  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:20.923310  333962 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:37:20.923336  333962 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:37:20.923354  333962 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:37:20.923397  333962 start.go:360] acquireMachinesLock for newest-cni-066482: {Name:mk25ceca9700045fc79c727ac5793f50b1f35f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:37:20.923467  333962 start.go:364] duration metric: took 45.165µs to acquireMachinesLock for "newest-cni-066482"
	I1102 13:37:20.923495  333962 start.go:96] Skipping create...Using existing machine configuration
	I1102 13:37:20.923507  333962 fix.go:54] fixHost starting: 
	I1102 13:37:20.923821  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:20.947956  333962 fix.go:112] recreateIfNeeded on newest-cni-066482: state=Stopped err=<nil>
	W1102 13:37:20.947991  333962 fix.go:138] unexpected machine state, will restart: <nil>
	W1102 13:37:17.749910  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:19.754111  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:18.133437  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:20.135974  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:22.633523  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:19.800458  333276 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-538419" ...
	I1102 13:37:19.800582  333276 cli_runner.go:164] Run: docker start default-k8s-diff-port-538419
	I1102 13:37:20.258040  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:20.285518  333276 kic.go:430] container "default-k8s-diff-port-538419" state is running.
	I1102 13:37:20.285975  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:20.314790  333276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/config.json ...
	I1102 13:37:20.315668  333276 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:20.316243  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:20.344162  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:20.344635  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:20.344656  333276 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:20.345938  333276 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42554->127.0.0.1:33130: read: connection reset by peer
	I1102 13:37:23.485888  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.485911  333276 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-538419"
	I1102 13:37:23.485968  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.504539  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.504787  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.504808  333276 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-538419 && echo "default-k8s-diff-port-538419" | sudo tee /etc/hostname
	I1102 13:37:23.654299  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538419
	
	I1102 13:37:23.654392  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:23.673075  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:23.673329  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:23.673355  333276 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-538419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-538419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-538419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:23.814290  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:37:23.814321  333276 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:23.814341  333276 ubuntu.go:190] setting up certificates
	I1102 13:37:23.814351  333276 provision.go:84] configureAuth start
	I1102 13:37:23.814396  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:23.831955  333276 provision.go:143] copyHostCerts
	I1102 13:37:23.832026  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:23.832046  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:23.832132  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:23.832261  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:23.832273  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:23.832318  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:23.832420  333276 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:23.832433  333276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:23.832471  333276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:23.832546  333276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-538419 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-538419 localhost minikube]
	I1102 13:37:24.219472  333276 provision.go:177] copyRemoteCerts
	I1102 13:37:24.219536  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.219587  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.237848  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.340891  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 13:37:24.358910  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:24.376167  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:24.393830  333276 provision.go:87] duration metric: took 579.46643ms to configureAuth
	I1102 13:37:24.393865  333276 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:24.394064  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:24.394157  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.412877  333276 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.413122  333276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33130 <nil> <nil>}
	I1102 13:37:24.413143  333276 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:20.978818  333962 out.go:252] * Restarting existing docker container for "newest-cni-066482" ...
	I1102 13:37:20.978914  333962 cli_runner.go:164] Run: docker start newest-cni-066482
	I1102 13:37:21.270167  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:21.288682  333962 kic.go:430] container "newest-cni-066482" state is running.
	I1102 13:37:21.289009  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:21.309331  333962 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/config.json ...
	I1102 13:37:21.309611  333962 machine.go:94] provisionDockerMachine start ...
	I1102 13:37:21.309709  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:21.330053  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:21.330413  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:21.330432  333962 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:37:21.331174  333962 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55362->127.0.0.1:33135: read: connection reset by peer
	I1102 13:37:24.473386  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.473415  333962 ubuntu.go:182] provisioning hostname "newest-cni-066482"
	I1102 13:37:24.473479  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.491931  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.492137  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.492150  333962 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-066482 && echo "newest-cni-066482" | sudo tee /etc/hostname
	I1102 13:37:24.643677  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-066482
	
	I1102 13:37:24.643803  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.663238  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:24.663468  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:24.663495  333962 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-066482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-066482/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-066482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:37:24.810077  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
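	The shell snippet above is an idempotent /etc/hosts fix-up: if no line already names the host, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. Illustrative effect (the stale name is invented for this sketch):
	
		# before:  127.0.1.1 some-stale-name
		# after:   127.0.1.1 newest-cni-066482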
	I1102 13:37:24.810117  333962 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-9416/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-9416/.minikube}
	I1102 13:37:24.810141  333962 ubuntu.go:190] setting up certificates
	I1102 13:37:24.810156  333962 provision.go:84] configureAuth start
	I1102 13:37:24.810212  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:24.827792  333962 provision.go:143] copyHostCerts
	I1102 13:37:24.827858  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem, removing ...
	I1102 13:37:24.827875  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem
	I1102 13:37:24.827953  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/ca.pem (1078 bytes)
	I1102 13:37:24.828150  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem, removing ...
	I1102 13:37:24.828164  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem
	I1102 13:37:24.828215  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/cert.pem (1123 bytes)
	I1102 13:37:24.828305  333962 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem, removing ...
	I1102 13:37:24.828317  333962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem
	I1102 13:37:24.828355  333962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-9416/.minikube/key.pem (1679 bytes)
	I1102 13:37:24.828426  333962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem org=jenkins.newest-cni-066482 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-066482]
	I1102 13:37:24.927237  333962 provision.go:177] copyRemoteCerts
	I1102 13:37:24.927289  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:37:24.927321  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:24.944584  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.045425  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:37:25.062863  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1102 13:37:25.080629  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:37:25.097296  333962 provision.go:87] duration metric: took 287.125327ms to configureAuth
	I1102 13:37:25.097332  333962 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:37:25.097535  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:25.097668  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.115731  333962 main.go:143] libmachine: Using SSH client type: native
	I1102 13:37:25.115937  333962 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1102 13:37:25.115955  333962 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:37:25.401017  333962 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:25.401045  333962 machine.go:97] duration metric: took 4.091415666s to provisionDockerMachine
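	provisionDockerMachine finishes once the env drop-in written just above is in place and CRI-O has restarted. A sketch for verifying the result over the same SSH session (paths from the log; that the crio unit sources this file via EnvironmentFile= is an assumption about the kicbase image):
	
		cat /etc/sysconfig/crio.minikube
		# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		systemctl cat crio | grep -i environmentfile   # assumption: unit references crio.minikube
		systemctl is-active crio                       # "active" after the restart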
	I1102 13:37:25.401058  333962 start.go:293] postStartSetup for "newest-cni-066482" (driver="docker")
	I1102 13:37:25.401071  333962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:25.401154  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:25.401203  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.420252  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.519659  333962 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:25.522994  333962 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:25.523015  333962 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:25.523025  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:25.523068  333962 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:25.523146  333962 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:25.523246  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.712619  333276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:37:24.712652  333276 machine.go:97] duration metric: took 4.396840284s to provisionDockerMachine
	I1102 13:37:24.712667  333276 start.go:293] postStartSetup for "default-k8s-diff-port-538419" (driver="docker")
	I1102 13:37:24.712682  333276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:37:24.712766  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:37:24.712819  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.733777  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.836037  333276 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:37:24.839702  333276 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:37:24.839733  333276 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:37:24.839744  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/addons for local assets ...
	I1102 13:37:24.839789  333276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-9416/.minikube/files for local assets ...
	I1102 13:37:24.839894  333276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem -> 129142.pem in /etc/ssl/certs
	I1102 13:37:24.840014  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 13:37:24.847534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:24.864718  333276 start.go:296] duration metric: took 152.035287ms for postStartSetup
	I1102 13:37:24.864791  333276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:24.864826  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:24.884885  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:24.983028  333276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:24.987641  333276 fix.go:56] duration metric: took 5.212515962s for fixHost
	I1102 13:37:24.987669  333276 start.go:83] releasing machines lock for "default-k8s-diff-port-538419", held for 5.212566618s
	I1102 13:37:24.987736  333276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538419
	I1102 13:37:25.007034  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.007083  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.007090  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.007125  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.007153  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.007176  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.007213  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.007274  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.007319  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:25.024428  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:25.135885  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.153535  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.171518  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.177840  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.186217  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190875  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.190931  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.225348  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.233857  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.242147  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245844  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.245889  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:25.282977  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:25.290988  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.299515  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303360  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.303415  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.338843  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
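	The openssl x509 -hash / ln -fs pairs above reproduce OpenSSL's c_rehash layout: the library locates a CA by the hash of its subject, so each PEM needs a /etc/ssl/certs/<subject_hash>.0 symlink. A minimal sketch of the same step for one certificate (names taken from the log; b5213941.0 is exactly the subject-hash filename for minikubeCA):
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"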
	I1102 13:37:25.348256  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:25.352326  333276 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:37:25.357122  333276 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:25.357227  333276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:25.361283  333276 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:25.422770  333276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:25.458920  333276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:25.463750  333276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:25.463815  333276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:25.471852  333276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:25.471874  333276 start.go:496] detecting cgroup driver to use...
	I1102 13:37:25.471904  333276 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:25.471948  333276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:25.485878  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:25.497990  333276 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:25.498045  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:25.512402  333276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:25.525187  333276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:25.608539  333276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:25.688830  333276 docker.go:234] disabling docker service ...
	I1102 13:37:25.688921  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:25.705783  333276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:25.723506  333276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:25.813168  333276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:25.898289  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:25.910519  333276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:25.924524  333276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:25.924604  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.933372  333276 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:25.933426  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.942218  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.951107  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.959830  333276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:25.967946  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.977032  333276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:25.986463  333276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
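	Taken together, the crictl.yaml write and the sed edits above point crictl at the CRI-O socket, pin the pause image, force the systemd cgroup manager, and open unprivileged low ports. An approximate shape of the two files after the edits (reconstructed from the commands themselves; the TOML section headers are standard CRI-O layout, and surrounding keys on the real image may differ):
	
		# /etc/crictl.yaml
		runtime-endpoint: unix:///var/run/crio/crio.sock
	
		# /etc/crio/crio.conf.d/02-crio.conf (excerpt)
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10.1"
	
		[crio.runtime]
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]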
	I1102 13:37:25.995429  333276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.003006  333276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.010445  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.094219  333276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.215173  333276 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.215239  333276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.219123  333276 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.219176  333276 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.222728  333276 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.250907  333276 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.250993  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.285974  333276 ssh_runner.go:195] Run: crio --version
	I1102 13:37:26.314527  333276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:25.531179  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.548059  333962 start.go:296] duration metric: took 146.985428ms for postStartSetup
	I1102 13:37:25.548168  333962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:37:25.548227  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.572631  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.670554  333962 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:37:25.674984  333962 fix.go:56] duration metric: took 4.751471621s for fixHost
	I1102 13:37:25.675009  333962 start.go:83] releasing machines lock for "newest-cni-066482", held for 4.751529653s
	I1102 13:37:25.675073  333962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-066482
	I1102 13:37:25.693462  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:25.693510  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:25.693517  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:25.693544  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:25.693612  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:25.693646  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:25.693704  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:25.693780  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:25.693820  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:25.715629  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:25.832398  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:25.854465  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:25.871731  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:25.877714  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:25.886048  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889747  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.889800  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:25.924157  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:25.932269  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:25.940725  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944474  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.944520  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:25.982544  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:25.991404  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:25.999821  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003838  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.003886  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:26.045614  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:26.054860  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:37:26.058745  333962 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 13:37:26.062392  333962 ssh_runner.go:195] Run: cat /version.json
	I1102 13:37:26.062503  333962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:37:26.066112  333962 ssh_runner.go:195] Run: systemctl --version
	I1102 13:37:26.127272  333962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:37:26.165639  333962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:37:26.170693  333962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:37:26.170747  333962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:37:26.179292  333962 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:37:26.179317  333962 start.go:496] detecting cgroup driver to use...
	I1102 13:37:26.179346  333962 detect.go:190] detected "systemd" cgroup driver on host os
	I1102 13:37:26.179401  333962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:37:26.194965  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:37:26.209348  333962 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:37:26.209406  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:37:26.224797  333962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:37:26.237179  333962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:37:26.329871  333962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:37:26.424322  333962 docker.go:234] disabling docker service ...
	I1102 13:37:26.424387  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:37:26.439911  333962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:37:26.453248  333962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:37:26.542141  333962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:37:26.630964  333962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:37:26.643532  333962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:37:26.658482  333962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:37:26.658590  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.668170  333962 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1102 13:37:26.668240  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.678403  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.687532  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.697557  333962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:37:26.707346  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.718538  333962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.729625  333962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:37:26.743583  333962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:37:26.753321  333962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:37:26.761369  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.839464  333962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:37:26.938004  333962 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:37:26.938073  333962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:37:26.942145  333962 start.go:564] Will wait 60s for crictl version
	I1102 13:37:26.942204  333962 ssh_runner.go:195] Run: which crictl
	I1102 13:37:26.946060  333962 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:37:26.972282  333962 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:37:26.972365  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.002057  333962 ssh_runner.go:195] Run: crio --version
	I1102 13:37:27.032337  333962 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:37:27.033686  333962 cli_runner.go:164] Run: docker network inspect newest-cni-066482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
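	The --format template above assembles a small JSON document directly out of docker network inspect. Illustrative output shape (values invented for this sketch; note the {{range}} over .Containers leaves a trailing comma after the last IP, which the caller evidently tolerates):
	
		{"Name": "newest-cni-066482","Driver": "bridge","Subnet": "192.168.76.0/24","Gateway": "192.168.76.1","MTU": 0, "ContainerIPs": ["192.168.76.2/24",]}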
	I1102 13:37:27.051527  333962 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:27.055606  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
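	The one-liner above is an idempotent upsert of a single /etc/hosts entry: drop any line already tagged host.minikube.internal, append the fresh mapping, and copy the temp file back as root. The generic pattern (NAME and IP are placeholders):
	
		{ grep -v $'\tNAME$' /etc/hosts; echo "IP	NAME"; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts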
	I1102 13:37:27.067494  333962 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1102 13:37:22.249113  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:24.748949  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:26.749600  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:26.315635  333276 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538419 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:37:26.333971  333276 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 13:37:26.337905  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.348667  333276 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:26.348772  333276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:26.348822  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.387710  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.387730  333276 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:26.387777  333276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:26.413505  333276 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:26.413528  333276 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:26.413538  333276 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1102 13:37:26.413643  333276 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
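	The unit fragment above leans on a systemd drop-in convention: the bare ExecStart= clears whatever ExecStart the base kubelet.service defines, and the following line installs minikube's full command. To see the merged result on the node (standard systemctl, shown for illustration):
	
		sudo systemctl cat kubelet    # base unit plus the 10-kubeadm.conf drop-in, merged
		sudo systemctl daemon-reload  # required after writing the drop-in, as the log does below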
	I1102 13:37:26.413707  333276 ssh_runner.go:195] Run: crio config
	I1102 13:37:26.464812  333276 cni.go:84] Creating CNI manager for ""
	I1102 13:37:26.464835  333276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:26.464845  333276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:37:26.464866  333276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538419 NodeName:default-k8s-diff-port-538419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:26.464984  333276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
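	The generated config stacks four API documents in one file, separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); per the scp lines below it lands at /var/tmp/minikube/kubeadm.yaml.new. A sketch for sanity-checking such a file offline (kubeadm config validate is a standard subcommand in recent kubeadm releases; shown purely as an illustration, not something this run executes):
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new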
	
	I1102 13:37:26.465035  333276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:26.474038  333276 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:26.474098  333276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:26.483977  333276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 13:37:26.499882  333276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:26.512917  333276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1102 13:37:26.525720  333276 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:26.529537  333276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:37:26.539879  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:26.630475  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:26.654165  333276 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419 for IP: 192.168.85.2
	I1102 13:37:26.654186  333276 certs.go:195] generating shared ca certs ...
	I1102 13:37:26.654206  333276 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:26.654367  333276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:26.654420  333276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:26.654431  333276 certs.go:257] generating profile certs ...
	I1102 13:37:26.654503  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/client.key
	I1102 13:37:26.654557  333276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key.ff08289d
	I1102 13:37:26.654639  333276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key
	I1102 13:37:26.654737  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:26.654764  333276 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:26.654773  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:26.654795  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:26.654816  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:26.654836  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:26.654873  333276 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:26.655534  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:26.675380  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:26.694442  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:26.715145  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:26.740328  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 13:37:26.762384  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 13:37:26.779554  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:26.801750  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/default-k8s-diff-port-538419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:37:26.818827  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:26.836709  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:26.855014  333276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:26.874155  333276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:26.887334  333276 ssh_runner.go:195] Run: openssl version
	I1102 13:37:26.893721  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:26.902112  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905794  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.905842  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:26.942658  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:26.950976  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:26.959359  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963079  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:26.963124  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.004948  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.013797  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.023152  333276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027166  333276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.027232  333276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.065532  333276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:37:27.074165  333276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.078238  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.117094  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:27.159482  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:27.208066  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:27.263395  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:27.326908  333276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
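	The six openssl calls above are the pre-flight expiry check: -checkend N exits 0 only if the certificate is still valid N seconds from now, so 86400 asks whether anything expires within a day. Standalone form of the same probe (the path is one of the certs from the log):
	
		openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
		  && echo "valid for at least 24h" || echo "renewal needed"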
	I1102 13:37:27.369723  333276 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:27.369813  333276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:27.369901  333276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:27.406986  333276 cri.go:89] found id: "9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc"
	I1102 13:37:27.407007  333276 cri.go:89] found id: "4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca"
	I1102 13:37:27.407013  333276 cri.go:89] found id: "59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae"
	I1102 13:37:27.407018  333276 cri.go:89] found id: "9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3"
	I1102 13:37:27.407022  333276 cri.go:89] found id: ""
	I1102 13:37:27.407085  333276 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:27.422941  333276 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:27Z" level=error msg="open /run/runc: no such file or directory"
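[editor's note: the warning above is tolerated by design: "open /run/runc: no such file or directory" just means runc has no state directory on this crio node, i.e. nothing can be paused, so the restart path logs it and continues. A hedged sketch of that interpretation; the helper name is illustrative:]

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listPaused sketches tolerating the failure seen above: when
	// /run/runc is missing, runc holds no container state at all,
	// which we can safely read as "nothing is paused".
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "no such file or directory") {
				return nil, nil // no runc state dir: no paused containers
			}
			return nil, fmt.Errorf("runc list: %v: %s", err, out)
		}
		// Real code would json.Unmarshal(out) and keep status == "paused".
		return nil, nil
	}

	func main() { fmt.Println(listPaused()) }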
	I1102 13:37:27.423012  333276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:27.432001  333276 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:27.432029  333276 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:27.432125  333276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:27.441699  333276 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:27.442817  333276 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-538419" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.443582  333276 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-538419" cluster setting kubeconfig missing "default-k8s-diff-port-538419" context setting]
	I1102 13:37:27.444782  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
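[editor's note: the "needs updating (will repair)" step rewrites the shared kubeconfig under a file lock, re-adding the missing cluster and context entries. A rough client-go equivalent, as a sketch; the AuthInfo wiring is trimmed, and the path, profile name, and server URL are taken from this log:]

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	func repairKubeconfig(path, name, server string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		cluster := api.NewCluster()
		cluster.Server = server // the profile's API endpoint
		cfg.Clusters[name] = cluster

		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name // real code must also populate cfg.AuthInfos[name]
		cfg.Contexts[name] = ctx
		cfg.CurrentContext = name

		return clientcmd.WriteToFile(*cfg, path)
	}

	func main() {
		_ = repairKubeconfig("/home/jenkins/minikube-integration/21808-9416/kubeconfig",
			"default-k8s-diff-port-538419", "https://192.168.85.2:8444")
	}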
	I1102 13:37:27.446868  333276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:27.456310  333276 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1102 13:37:27.456342  333276 kubeadm.go:602] duration metric: took 24.307485ms to restartPrimaryControlPlane
	I1102 13:37:27.456351  333276 kubeadm.go:403] duration metric: took 86.638872ms to StartCluster
	I1102 13:37:27.456373  333276 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.456425  333276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:27.458467  333276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.458734  333276 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:27.458787  333276 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:27.458879  333276 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458899  333276 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.458911  333276 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:27.458908  333276 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458932  333276 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-538419"
	I1102 13:37:27.458925  333276 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-538419"
	I1102 13:37:27.458942  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	W1102 13:37:27.458947  333276 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:27.458958  333276 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-538419"
	I1102 13:37:27.458977  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.459272  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459436  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.459713  333276 config.go:182] Loaded profile config "default-k8s-diff-port-538419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:27.463479  333276 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:27.466531  333276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.489401  333276 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:27.489460  333276 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.490695  333276 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-538419"
	W1102 13:37:27.490742  333276 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:27.490779  333276 host.go:66] Checking if "default-k8s-diff-port-538419" exists ...
	I1102 13:37:27.490905  333276 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.490993  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:27.491127  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.491342  333276 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538419 --format={{.State.Status}}
	I1102 13:37:27.492226  333276 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1102 13:37:24.634329  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:27.133336  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:27.068545  333962 kubeadm.go:884] updating cluster {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:37:27.068680  333962 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:37:27.068745  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.101393  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.101420  333962 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:37:27.101479  333962 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:37:27.128092  333962 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:37:27.128116  333962 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:37:27.128126  333962 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 13:37:27.128251  333962 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-066482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 13:37:27.128346  333962 ssh_runner.go:195] Run: crio config
	I1102 13:37:27.177989  333962 cni.go:84] Creating CNI manager for ""
	I1102 13:37:27.178010  333962 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:37:27.178023  333962 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1102 13:37:27.178058  333962 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-066482 NodeName:newest-cni-066482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:37:27.178237  333962 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-066482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:37:27.178304  333962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:37:27.189125  333962 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:37:27.189195  333962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:37:27.198724  333962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 13:37:27.212769  333962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:37:27.228632  333962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
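[editor's note: the kubeadm.yaml just written pins podSubnet 10.42.0.0/16 (from the pod-network-cidr extra option) alongside the default serviceSubnet 10.96.0.0/12; the two ranges must stay disjoint or kube-proxy's clusterCIDR-based traffic decisions break. A quick sanity check, purely illustrative:]

	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		pods := netip.MustParsePrefix("10.42.0.0/16")     // podSubnet / clusterCIDR above
		services := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet above
		// Overlapping ranges would make service VIPs unroutable from pods.
		fmt.Println("overlap:", pods.Overlaps(services)) // prints: overlap: false
	}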
	I1102 13:37:27.246146  333962 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:37:27.251613  333962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
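[editor's note: the bash one-liner above strips any stale control-plane.minikube.internal mapping from /etc/hosts and appends the current one. The same transformation in Go, sketched with the values from this log:]

	package main

	import (
		"os"
		"strings"
	)

	// pinHost mirrors the one-liner: drop any existing line mapping the
	// control-plane alias, then append the current IP for it.
	func pinHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // equivalent of grep -v $'\tcontrol-plane.minikube.internal$'
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		_ = pinHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal")
	}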
	I1102 13:37:27.264788  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:27.377806  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.402967  333962 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482 for IP: 192.168.76.2
	I1102 13:37:27.402990  333962 certs.go:195] generating shared ca certs ...
	I1102 13:37:27.403009  333962 certs.go:227] acquiring lock for ca certs: {Name:mk94bee420d2083822d4a5b3f03b76819aaa139f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:27.403159  333962 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key
	I1102 13:37:27.403219  333962 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key
	I1102 13:37:27.403231  333962 certs.go:257] generating profile certs ...
	I1102 13:37:27.403335  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/client.key
	I1102 13:37:27.403407  333962 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key.c4504c8b
	I1102 13:37:27.403461  333962 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key
	I1102 13:37:27.403744  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem (1338 bytes)
	W1102 13:37:27.403786  333962 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914_empty.pem, impossibly tiny 0 bytes
	I1102 13:37:27.403799  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca-key.pem (1675 bytes)
	I1102 13:37:27.403828  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/ca.pem (1078 bytes)
	I1102 13:37:27.403859  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:37:27.403889  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/certs/key.pem (1679 bytes)
	I1102 13:37:27.403938  333962 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem (1708 bytes)
	I1102 13:37:27.404687  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:37:27.430704  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1102 13:37:27.452417  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:37:27.483637  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:37:27.517977  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 13:37:27.573265  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:37:27.598304  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:37:27.618317  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/newest-cni-066482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:37:27.639808  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/ssl/certs/129142.pem --> /usr/share/ca-certificates/129142.pem (1708 bytes)
	I1102 13:37:27.657181  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:37:27.681070  333962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-9416/.minikube/certs/12914.pem --> /usr/share/ca-certificates/12914.pem (1338 bytes)
	I1102 13:37:27.704152  333962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:37:27.722253  333962 ssh_runner.go:195] Run: openssl version
	I1102 13:37:27.731519  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:37:27.743037  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748191  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 12:47 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.748248  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:37:27.799685  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:37:27.809081  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12914.pem && ln -fs /usr/share/ca-certificates/12914.pem /etc/ssl/certs/12914.pem"
	I1102 13:37:27.818029  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822628  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 12:53 /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.822681  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12914.pem
	I1102 13:37:27.881477  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12914.pem /etc/ssl/certs/51391683.0"
	I1102 13:37:27.891397  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129142.pem && ln -fs /usr/share/ca-certificates/129142.pem /etc/ssl/certs/129142.pem"
	I1102 13:37:27.900808  333962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904551  333962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 12:53 /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.904621  333962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129142.pem
	I1102 13:37:27.942963  333962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129142.pem /etc/ssl/certs/3ec20f2e.0"
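[editor's note: the three "test -L ... || ln -fs ..." runs above create the OpenSSL subject-hash symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that let TLS clients discover the copied CAs in /etc/ssl/certs. A sketch of one link step, shelling out to openssl the same way the log does; assumes openssl is on PATH:]

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert installs pemPath into certDir under its OpenSSL
	// subject-hash name (<hash>.0), mirroring the `openssl x509 -hash`
	// + `ln -fs` pair in the log above.
	func linkCert(pemPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certDir, hash+".0")
		os.Remove(link) // -f semantics: replace a stale link if present
		return os.Symlink(pemPath, link)
	}

	func main() {
		_ = linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	}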
	I1102 13:37:27.952008  333962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:37:27.956221  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:37:27.997863  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:37:28.047948  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:37:28.098660  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:37:28.159695  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:37:28.224833  333962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1102 13:37:28.294684  333962 kubeadm.go:401] StartCluster: {Name:newest-cni-066482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-066482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:37:28.294796  333962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:37:28.294862  333962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:37:28.338693  333962 cri.go:89] found id: "a2d506030cda6d875bd7f355684f5c35e2258d147a0e61553747aae3c6b86db9"
	I1102 13:37:28.338718  333962 cri.go:89] found id: "9244b3749165cc6d1152b3aea619a9f3b06a320ff7349265dc55280531b5447c"
	I1102 13:37:28.338726  333962 cri.go:89] found id: "119e599a978f8ef0c3e7f7da05213c782cabded7c3d9e2e2c0871a008b45454a"
	I1102 13:37:28.338732  333962 cri.go:89] found id: "b46475f69b265dbe271302b636e35104400109075dfef091cb2a202e60f5e119"
	I1102 13:37:28.338766  333962 cri.go:89] found id: ""
	I1102 13:37:28.338853  333962 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:37:28.354945  333962 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:37:28Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:37:28.355009  333962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:37:28.369068  333962 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:37:28.369089  333962 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:37:28.369134  333962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:37:28.379230  333962 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:37:28.380715  333962 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-066482" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.381840  333962 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-9416/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-066482" cluster setting kubeconfig missing "newest-cni-066482" context setting]
	I1102 13:37:28.383187  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.385699  333962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:37:28.395624  333962 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1102 13:37:28.395794  333962 kubeadm.go:602] duration metric: took 26.694184ms to restartPrimaryControlPlane
	I1102 13:37:28.395818  333962 kubeadm.go:403] duration metric: took 101.142697ms to StartCluster
	I1102 13:37:28.395872  333962 settings.go:142] acquiring lock: {Name:mk5677b9226b5cf3e40f4fd3a607237cf5a03844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.396257  333962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:37:28.398943  333962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/kubeconfig: {Name:mk8cf6fa389ac56de0504f7fecc6dfc4028d1b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:37:28.399509  333962 config.go:182] Loaded profile config "newest-cni-066482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:37:28.399593  333962 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:37:28.399697  333962 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-066482"
	I1102 13:37:28.399715  333962 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-066482"
	W1102 13:37:28.399723  333962 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:37:28.399747  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400242  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400322  333962 addons.go:70] Setting dashboard=true in profile "newest-cni-066482"
	I1102 13:37:28.400358  333962 addons.go:239] Setting addon dashboard=true in "newest-cni-066482"
	W1102 13:37:28.400367  333962 addons.go:248] addon dashboard should already be in state true
	I1102 13:37:28.400398  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.400424  333962 addons.go:70] Setting default-storageclass=true in profile "newest-cni-066482"
	I1102 13:37:28.400440  333962 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-066482"
	I1102 13:37:28.400747  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.400930  333962 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:37:28.401517  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.404755  333962 out.go:179] * Verifying Kubernetes components...
	I1102 13:37:28.405862  333962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:37:28.441415  333962 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 13:37:28.441452  333962 addons.go:239] Setting addon default-storageclass=true in "newest-cni-066482"
	W1102 13:37:28.441469  333962 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:37:28.441497  333962 host.go:66] Checking if "newest-cni-066482" exists ...
	I1102 13:37:28.441992  333962 cli_runner.go:164] Run: docker container inspect newest-cni-066482 --format={{.State.Status}}
	I1102 13:37:28.443413  333962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 13:37:28.443587  333962 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:37:27.493290  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:27.493307  333276 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:27.493359  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.524914  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.531668  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.532019  333276 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.532031  333276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:27.532222  333276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538419
	I1102 13:37:27.567797  333276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/default-k8s-diff-port-538419/id_rsa Username:docker}
	I1102 13:37:27.652323  333276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:27.668241  333276 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538419" to be "Ready" ...
	I1102 13:37:27.674864  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:27.674945  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:27.680089  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:27.693623  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:27.693664  333276 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:27.697013  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:27.711998  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:27.712105  333276 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:27.730732  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:27.730759  333276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:27.750616  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:27.750640  333276 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:27.770302  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:27.770348  333276 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:27.786951  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:27.786978  333276 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:27.803298  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:27.803327  333276 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:27.818949  333276 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:27.818969  333276 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:27.832390  333276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:29.492024  333276 node_ready.go:49] node "default-k8s-diff-port-538419" is "Ready"
	I1102 13:37:29.492059  333276 node_ready.go:38] duration metric: took 1.82377358s for node "default-k8s-diff-port-538419" to be "Ready" ...
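[editor's note: the 1.82s "Ready" wait above is a poll on the node object until its NodeReady condition reports True. A condensed client-go sketch of such a loop; the kubeconfig path and node name come from this log, the poll interval is illustrative:]

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(kubeconfig, node string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		return wait.PollUntilContextTimeout(context.Background(), time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				n, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range n.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		_ = waitNodeReady("/home/jenkins/minikube-integration/21808-9416/kubeconfig",
			"default-k8s-diff-port-538419", 6*time.Minute)
	}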
	I1102 13:37:29.492086  333276 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:29.492140  333276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:30.138979  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458843131s)
	I1102 13:37:30.139203  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.306780942s)
	I1102 13:37:30.139232  333276 api_server.go:72] duration metric: took 2.680469941s to wait for apiserver process to appear ...
	I1102 13:37:30.139245  333276 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:30.139262  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.139337  333276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.442032819s)
	I1102 13:37:30.140830  333276 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-538419 addons enable metrics-server
	
	I1102 13:37:30.144441  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.144472  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
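[editor's note: these 500s are expected mid-startup: /healthz stays red until the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and minikube simply re-polls until it reads 200 (as happens at 13:37:32 below for the newest-cni profile). A bare-bones poller, sketched; certificate verification is skipped because the apiserver presents a cert signed by minikube's own CA:]

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert chains to minikube's private CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is up
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz not OK within %s", timeout)
	}

	func main() {
		fmt.Println(pollHealthz("https://192.168.85.2:8444/healthz", time.Minute))
	}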
	I1102 13:37:30.146788  333276 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1102 13:37:28.444400  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 13:37:28.444417  333962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 13:37:28.444498  333962 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.444527  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:37:28.444586  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.444500  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.481261  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.483777  333962 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.483797  333962 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:37:28.483850  333962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-066482
	I1102 13:37:28.485369  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.519190  333962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/newest-cni-066482/id_rsa Username:docker}
	I1102 13:37:28.625401  333962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:37:28.638037  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:37:28.653422  333962 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:37:28.653533  333962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:37:28.682341  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:37:28.694090  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 13:37:28.694153  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 13:37:28.716329  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 13:37:28.716362  333962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 13:37:28.737776  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 13:37:28.737802  333962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 13:37:28.755596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 13:37:28.755618  333962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 13:37:28.780596  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 13:37:28.780618  333962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 13:37:28.797326  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 13:37:28.797355  333962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 13:37:28.814533  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 13:37:28.814561  333962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 13:37:28.832611  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 13:37:28.832643  333962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 13:37:28.856649  333962 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:28.856713  333962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 13:37:28.874888  333962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 13:37:31.209184  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.571053535s)
	I1102 13:37:31.209241  333962 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.555675413s)
	I1102 13:37:31.209282  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.526844296s)
	I1102 13:37:31.209372  333962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.334451096s)
	I1102 13:37:31.209287  333962 api_server.go:72] duration metric: took 2.808316845s to wait for apiserver process to appear ...
	I1102 13:37:31.209432  333962 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:37:31.209539  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.211060  333962 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-066482 addons enable metrics-server
	
	I1102 13:37:31.216831  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.216854  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.222003  333962 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1102 13:37:28.750465  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	W1102 13:37:30.751057  321355 pod_ready.go:104] pod "coredns-66bc5c9577-2dtpc" is not "Ready", error: <nil>
	I1102 13:37:31.223225  333962 addons.go:515] duration metric: took 2.823637855s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:31.709830  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:31.714383  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:31.714411  333962 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:32.209645  333962 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 13:37:32.214358  333962 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 13:37:32.215702  333962 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:32.215723  333962 api_server.go:131] duration metric: took 1.006197716s to wait for apiserver health ...
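Note on the sequence above: the 500-to-200 flip is the apiserver's aggregated /healthz coming good once the rbac/bootstrap-roles post-start hook finishes. A minimal Go sketch of the polling loop these lines imply; the URL and ~0.5s cadence are read off the log, everything else (names, timeout) is an assumption, not minikube's actual api_server.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz polls /healthz until it returns 200 or the deadline passes,
	// mirroring the "Checking apiserver healthz" / "returned 500/200" lines.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver presents a cluster-local cert here, so the
			// sketch skips verification; real code would pin the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "returned 200: ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the gap between checks above
		}
		return fmt.Errorf("apiserver not healthy after %v", timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}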
	I1102 13:37:32.215740  333962 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:32.219326  333962 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:32.219361  333962 system_pods.go:61] "coredns-66bc5c9577-9knvp" [fc8ccf3a-6c3a-4df9-b174-358eea8022b8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219370  333962 system_pods.go:61] "etcd-newest-cni-066482" [b4f125a2-c9c3-4192-bf23-c4ad050bb815] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:32.219379  333962 system_pods.go:61] "kindnet-schdw" [74998f6e-2a7a-40d8-a5c2-a1142f69ee93] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 13:37:32.219392  333962 system_pods.go:61] "kube-apiserver-newest-cni-066482" [e270489b-3057-480f-96dd-329cbcc6f0e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:32.219397  333962 system_pods.go:61] "kube-controller-manager-newest-cni-066482" [9b62b1ef-e72e-41f9-9e3d-c57bfaf0b578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:32.219403  333962 system_pods.go:61] "kube-proxy-fkp22" [85a24a6f-4f8c-4671-92f6-fbe43ab7bb10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 13:37:32.219408  333962 system_pods.go:61] "kube-scheduler-newest-cni-066482" [5f88460d-ea42-4891-a458-b86eb57b551e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:32.219417  333962 system_pods.go:61] "storage-provisioner" [3bbb95ec-ecf8-4335-b3df-82a08d03b66b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 13:37:32.219424  333962 system_pods.go:74] duration metric: took 3.677705ms to wait for pod list to return data ...
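The "system_pods" step above is a plain list of the kube-system namespace; the Unschedulable entries are pods blocked by the node.kubernetes.io/not-ready taint until the node settles. A minimal client-go sketch of the same query (the kubeconfig path is a placeholder assumption):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; minikube writes a kubeconfig per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("%q %s\n", p.Name, p.Status.Phase) // Pending / Running, as above
		}
	}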
	I1102 13:37:32.219434  333962 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:32.221997  333962 default_sa.go:45] found service account: "default"
	I1102 13:37:32.222015  333962 default_sa.go:55] duration metric: took 2.576388ms for default service account to be created ...
	I1102 13:37:32.222026  333962 kubeadm.go:587] duration metric: took 3.821064355s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 13:37:32.222059  333962 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:32.224451  333962 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:32.224479  333962 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:32.224495  333962 node_conditions.go:105] duration metric: took 2.431117ms to run NodePressure ...
	I1102 13:37:32.224508  333962 start.go:242] waiting for startup goroutines ...
	I1102 13:37:32.224519  333962 start.go:247] waiting for cluster config update ...
	I1102 13:37:32.224531  333962 start.go:256] writing updated cluster config ...
	I1102 13:37:32.224891  333962 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:32.277880  333962 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:32.280437  333962 out.go:179] * Done! kubectl is now configured to use "newest-cni-066482" cluster and "default" namespace by default
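The "minor skew" in the kubectl line above is the distance between client and server minor versions. A toy sketch of that computation (helper name is made up; assumes well-formed x.y.z versions):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew reproduces the "(minor skew: 0)" figure for 1.34.1 vs 1.34.1.
	func minorSkew(client, server string) int {
		minor := func(v string) int {
			m, _ := strconv.Atoi(strings.Split(v, ".")[1])
			return m
		}
		d := minor(client) - minor(server)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		fmt.Println("minor skew:", minorSkew("1.34.1", "1.34.1")) // 0
	}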
	W1102 13:37:29.133694  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:31.633878  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:32.248764  321355 pod_ready.go:94] pod "coredns-66bc5c9577-2dtpc" is "Ready"
	I1102 13:37:32.248791  321355 pod_ready.go:86] duration metric: took 36.005777547s for pod "coredns-66bc5c9577-2dtpc" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.251505  321355 pod_ready.go:83] waiting for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.256003  321355 pod_ready.go:94] pod "etcd-no-preload-978795" is "Ready"
	I1102 13:37:32.256030  321355 pod_ready.go:86] duration metric: took 4.500033ms for pod "etcd-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.258154  321355 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.262361  321355 pod_ready.go:94] pod "kube-apiserver-no-preload-978795" is "Ready"
	I1102 13:37:32.262386  321355 pod_ready.go:86] duration metric: took 4.208933ms for pod "kube-apiserver-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.264670  321355 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.446929  321355 pod_ready.go:94] pod "kube-controller-manager-no-preload-978795" is "Ready"
	I1102 13:37:32.446958  321355 pod_ready.go:86] duration metric: took 182.263594ms for pod "kube-controller-manager-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:32.647228  321355 pod_ready.go:83] waiting for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.046223  321355 pod_ready.go:94] pod "kube-proxy-rmkmd" is "Ready"
	I1102 13:37:33.046245  321355 pod_ready.go:86] duration metric: took 398.98563ms for pod "kube-proxy-rmkmd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.247357  321355 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646686  321355 pod_ready.go:94] pod "kube-scheduler-no-preload-978795" is "Ready"
	I1102 13:37:33.646712  321355 pod_ready.go:86] duration metric: took 399.328602ms for pod "kube-scheduler-no-preload-978795" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:33.646724  321355 pod_ready.go:40] duration metric: took 37.476249238s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:33.693279  321355 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:33.695127  321355 out.go:179] * Done! kubectl is now configured to use "no-preload-978795" cluster and "default" namespace by default
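Every pod_ready transition in this run boils down to one check: does the pod carry the Ready=True condition. A small helper showing that test (a sketch only; minikube's pod_ready.go layers retry and "or be gone" handling on top):

	package readiness

	import corev1 "k8s.io/api/core/v1"

	// isPodReady reports whether the PodReady condition is True; this is
	// what flips the W-level "is not Ready" lines above to "is Ready".
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}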
	I1102 13:37:30.148737  333276 addons.go:515] duration metric: took 2.689945409s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1102 13:37:30.639704  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:30.646596  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:37:30.646625  333276 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:37:31.140024  333276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1102 13:37:31.144505  333276 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1102 13:37:31.145652  333276 api_server.go:141] control plane version: v1.34.1
	I1102 13:37:31.145677  333276 api_server.go:131] duration metric: took 1.006426268s to wait for apiserver health ...
	I1102 13:37:31.145686  333276 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:37:31.148654  333276 system_pods.go:59] 8 kube-system pods found
	I1102 13:37:31.148693  333276 system_pods.go:61] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.148706  333276 system_pods.go:61] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.148715  333276 system_pods.go:61] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.148725  333276 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.148735  333276 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.148740  333276 system_pods.go:61] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.148749  333276 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.148752  333276 system_pods.go:61] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.148758  333276 system_pods.go:74] duration metric: took 3.0672ms to wait for pod list to return data ...
	I1102 13:37:31.148767  333276 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:37:31.151024  333276 default_sa.go:45] found service account: "default"
	I1102 13:37:31.151047  333276 default_sa.go:55] duration metric: took 2.27431ms for default service account to be created ...
	I1102 13:37:31.151056  333276 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:37:31.153886  333276 system_pods.go:86] 8 kube-system pods found
	I1102 13:37:31.153909  333276 system_pods.go:89] "coredns-66bc5c9577-4xsxx" [89d1e97a-38e0-47b8-a6c4-4615003a5618] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:37:31.153917  333276 system_pods.go:89] "etcd-default-k8s-diff-port-538419" [e96c82b9-4852-489f-97fd-dacb31bef09a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:37:31.153923  333276 system_pods.go:89] "kindnet-gc6n2" [51ce1d18-d59b-408c-b247-1f51a7f81bb0] Running
	I1102 13:37:31.153933  333276 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538419" [fb5e8926-5d1b-4b28-8552-9e0018ff1e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:37:31.153941  333276 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538419" [6d2e5ca9-205c-4c28-b68e-585bd4ecc260] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:37:31.153948  333276 system_pods.go:89] "kube-proxy-nnhqs" [df597ea0-03ac-465d-84e3-2ddca37151d2] Running
	I1102 13:37:31.153953  333276 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538419" [3cf768e9-b18f-4f4f-900a-0547de33cdef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:37:31.153958  333276 system_pods.go:89] "storage-provisioner" [743c59db-77d8-44d3-85b6-fa5d0e288d93] Running
	I1102 13:37:31.153965  333276 system_pods.go:126] duration metric: took 2.903516ms to wait for k8s-apps to be running ...
	I1102 13:37:31.153973  333276 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:37:31.154011  333276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:37:31.167191  333276 system_svc.go:56] duration metric: took 13.212049ms WaitForService to wait for kubelet
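The kubelet service check above is a single systemd query run on the node. A local sketch of the same probe (the real run goes over SSH via ssh_runner.go; the argv is copied verbatim from the log line):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `systemctl is-active --quiet ...` exits 0 iff the unit is active,
		// so the error value alone answers the WaitForService question.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet not active:", err)
			return
		}
		fmt.Println("kubelet active")
	}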
	I1102 13:37:31.167214  333276 kubeadm.go:587] duration metric: took 3.70845301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:37:31.167229  333276 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:37:31.170065  333276 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1102 13:37:31.170091  333276 node_conditions.go:123] node cpu capacity is 8
	I1102 13:37:31.170118  333276 node_conditions.go:105] duration metric: took 2.883566ms to run NodePressure ...
	I1102 13:37:31.170133  333276 start.go:242] waiting for startup goroutines ...
	I1102 13:37:31.170146  333276 start.go:247] waiting for cluster config update ...
	I1102 13:37:31.170163  333276 start.go:256] writing updated cluster config ...
	I1102 13:37:31.170468  333276 ssh_runner.go:195] Run: rm -f paused
	I1102 13:37:31.174099  333276 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:31.178339  333276 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 13:37:33.184101  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:34.134125  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:36.633840  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:35.685411  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:38.184423  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:39.134511  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:41.633152  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:40.683713  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:43.183801  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:43.634797  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:46.133702  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	W1102 13:37:45.684695  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:48.183904  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:48.633463  328990 pod_ready.go:104] pod "coredns-66bc5c9577-vpq66" is not "Ready", error: <nil>
	I1102 13:37:49.633961  328990 pod_ready.go:94] pod "coredns-66bc5c9577-vpq66" is "Ready"
	I1102 13:37:49.633983  328990 pod_ready.go:86] duration metric: took 36.006114822s for pod "coredns-66bc5c9577-vpq66" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.636373  328990 pod_ready.go:83] waiting for pod "etcd-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.640305  328990 pod_ready.go:94] pod "etcd-embed-certs-748183" is "Ready"
	I1102 13:37:49.640326  328990 pod_ready.go:86] duration metric: took 3.933112ms for pod "etcd-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.642169  328990 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.645917  328990 pod_ready.go:94] pod "kube-apiserver-embed-certs-748183" is "Ready"
	I1102 13:37:49.645933  328990 pod_ready.go:86] duration metric: took 3.743148ms for pod "kube-apiserver-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.647713  328990 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:49.832391  328990 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748183" is "Ready"
	I1102 13:37:49.832415  328990 pod_ready.go:86] duration metric: took 184.682932ms for pod "kube-controller-manager-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.032477  328990 pod_ready.go:83] waiting for pod "kube-proxy-pg8nt" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.432219  328990 pod_ready.go:94] pod "kube-proxy-pg8nt" is "Ready"
	I1102 13:37:50.432252  328990 pod_ready.go:86] duration metric: took 399.749991ms for pod "kube-proxy-pg8nt" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:50.632021  328990 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:51.032263  328990 pod_ready.go:94] pod "kube-scheduler-embed-certs-748183" is "Ready"
	I1102 13:37:51.032285  328990 pod_ready.go:86] duration metric: took 400.23928ms for pod "kube-scheduler-embed-certs-748183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:37:51.032297  328990 pod_ready.go:40] duration metric: took 37.407986415s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:37:51.078471  328990 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:37:51.080252  328990 out.go:179] * Done! kubectl is now configured to use "embed-certs-748183" cluster and "default" namespace by default
	W1102 13:37:50.684482  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:52.684813  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:55.183972  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:37:57.684208  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:38:00.183283  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:38:02.184008  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	W1102 13:38:04.683177  333276 pod_ready.go:104] pod "coredns-66bc5c9577-4xsxx" is not "Ready", error: <nil>
	I1102 13:38:06.683209  333276 pod_ready.go:94] pod "coredns-66bc5c9577-4xsxx" is "Ready"
	I1102 13:38:06.683235  333276 pod_ready.go:86] duration metric: took 35.504872374s for pod "coredns-66bc5c9577-4xsxx" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.686499  333276 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.690658  333276 pod_ready.go:94] pod "etcd-default-k8s-diff-port-538419" is "Ready"
	I1102 13:38:06.690683  333276 pod_ready.go:86] duration metric: took 4.162031ms for pod "etcd-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.692830  333276 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.696597  333276 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-538419" is "Ready"
	I1102 13:38:06.696620  333276 pod_ready.go:86] duration metric: took 3.762714ms for pod "kube-apiserver-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.698448  333276 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:06.881706  333276 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-538419" is "Ready"
	I1102 13:38:06.881742  333276 pod_ready.go:86] duration metric: took 183.271121ms for pod "kube-controller-manager-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:07.082248  333276 pod_ready.go:83] waiting for pod "kube-proxy-nnhqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:07.481632  333276 pod_ready.go:94] pod "kube-proxy-nnhqs" is "Ready"
	I1102 13:38:07.481661  333276 pod_ready.go:86] duration metric: took 399.382528ms for pod "kube-proxy-nnhqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:07.682180  333276 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:08.081746  333276 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-538419" is "Ready"
	I1102 13:38:08.081771  333276 pod_ready.go:86] duration metric: took 399.564273ms for pod "kube-scheduler-default-k8s-diff-port-538419" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:38:08.081786  333276 pod_ready.go:40] duration metric: took 36.907651629s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:38:08.128554  333276 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1102 13:38:08.130999  333276 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-538419" cluster and "default" namespace by default
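The long W-level stretches above fire on a fixed poll: roughly every 2s, bounded by the advertised "extra waiting up to 4m0s". A condensed sketch of such a loop using apimachinery's wait helpers (interval and timeout read off the log; the condition body is a stand-in):

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		attempt := 0
		err := wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				attempt++
				// Stand-in condition; the real check inspects the PodReady
				// condition of each labelled kube-system pod.
				return attempt >= 5, nil
			})
		fmt.Println("wait finished:", err)
	}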
	
	
	==> CRI-O <==
	Nov 02 13:37:40 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:37:40.589229473Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 13:37:40 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:37:40.824762566Z" level=info msg="Removing container: 902076b4f117d5111fb4cb9e9e5feb66c35ebb663fded9cdf64fb74ecfa0a4a6" id=210e6115-4dd6-4c45-9410-2334d2ff067c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:37:40 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:37:40.834181029Z" level=info msg="Removed container 902076b4f117d5111fb4cb9e9e5feb66c35ebb663fded9cdf64fb74ecfa0a4a6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k/dashboard-metrics-scraper" id=210e6115-4dd6-4c45-9410-2334d2ff067c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.874972471Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25e86424-b701-4acc-a6ea-492936b082bf name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.875971928Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5fa22bf2-7a05-42cb-87c4-6807198ec69a name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.877037252Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=400be694-030c-4dce-9aef-5e14529ac869 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.877180936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.88288817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.88302266Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/50bd5454a23bc96cf7d6acf7e16a4b51c988f578967ca4ff6fe0c9ceadafac99/merged/etc/passwd: no such file or directory"
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.883043132Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/50bd5454a23bc96cf7d6acf7e16a4b51c988f578967ca4ff6fe0c9ceadafac99/merged/etc/group: no such file or directory"
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.883249721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.911110496Z" level=info msg="Created container b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0: kube-system/storage-provisioner/storage-provisioner" id=400be694-030c-4dce-9aef-5e14529ac869 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.911721096Z" level=info msg="Starting container: b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0" id=8240f358-868c-4549-bfbf-f05257eb4ae3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:38:00 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:00.913608152Z" level=info msg="Started container" PID=1784 containerID=b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0 description=kube-system/storage-provisioner/storage-provisioner id=8240f358-868c-4549-bfbf-f05257eb4ae3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b56c51c10c9eaed681ff6242fa8d278869d1009d24a551d25dac01cbd38df896
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.756238558Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ed552145-3791-40a9-bc5a-1fbfaf874af9 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.757138337Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c3fd92e6-ecab-4384-ba22-d3010fe94f35 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.758085004Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k/dashboard-metrics-scraper" id=fa0346dc-9c59-4999-b308-96612f770f05 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.758206888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.763709034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.764481647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.804393093Z" level=info msg="Created container 2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k/dashboard-metrics-scraper" id=fa0346dc-9c59-4999-b308-96612f770f05 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.805085769Z" level=info msg="Starting container: 2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2" id=a9307887-512c-4966-aad8-9ad8b9380816 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.80716673Z" level=info msg="Started container" PID=1800 containerID=2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k/dashboard-metrics-scraper id=a9307887-512c-4966-aad8-9ad8b9380816 name=/runtime.v1.RuntimeService/StartContainer sandboxID=04a715c66e00e0a6dbab16a090dfd35972bab8d6acb440739d246e62bbfd837d
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.885837048Z" level=info msg="Removing container: 667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561" id=8d2f61dd-8b82-48c5-b7d9-8a9f712b3e38 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:38:02 default-k8s-diff-port-538419 crio[594]: time="2025-11-02T13:38:02.897773788Z" level=info msg="Removed container 667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k/dashboard-metrics-scraper" id=8d2f61dd-8b82-48c5-b7d9-8a9f712b3e38 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	2060b3aa9bf65       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   04a715c66e00e       dashboard-metrics-scraper-6ffb444bf9-98t5k             kubernetes-dashboard
	b5f1c0f89cbd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   b56c51c10c9ea       storage-provisioner                                    kube-system
	3b4d565f2df6b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   1cd27e51b91d3       kubernetes-dashboard-855c9754f9-zcdhn                  kubernetes-dashboard
	c9b5ad92438bb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   5a77fb4aae810       coredns-66bc5c9577-4xsxx                               kube-system
	77e108f874417       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   4578bbab212f8       busybox                                                default
	a1deaef6b0856       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   b56c51c10c9ea       storage-provisioner                                    kube-system
	5893cf1512ee0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   d91ae740f3506       kube-proxy-nnhqs                                       kube-system
	9fe26d5a73cb2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   3b89bde86bdbc       kindnet-gc6n2                                          kube-system
	9c0a5c5252f4d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   2f6f9e440309d       etcd-default-k8s-diff-port-538419                      kube-system
	4b0ca32f1b94d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   1a42344b48320       kube-scheduler-default-k8s-diff-port-538419            kube-system
	59c16f4262360       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   11ec43a7f2236       kube-controller-manager-default-k8s-diff-port-538419   kube-system
	9d75eaf3dc03d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   fd4560b819579       kube-apiserver-default-k8s-diff-port-538419            kube-system
	
	
	==> coredns [c9b5ad92438bb88eb2038be88d7936f90369f0d2d1fbc95af1cb6ec286ad7cee] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52831 - 62528 "HINFO IN 4975869981560521564.5184462275221150874. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067480545s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
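The "Still waiting on: kubernetes" lines come from coredns's ready plugin: it holds its readiness endpoint (HTTP on port 8181 by default) until every plugin, including the kubernetes plugin still timing out against 10.96.0.1:443, reports ready. A sketch of probing that endpoint (the pod IP is hypothetical):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Returns 200 only once all plugins signal ready; before that the
		// kubelet readiness probe sees failures, matching the log above.
		resp, err := http.Get("http://10.244.0.5:8181/ready") // hypothetical pod IP
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("coredns /ready:", resp.StatusCode)
	}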
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-538419
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-538419
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=default-k8s-diff-port-538419
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_36_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:36:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-538419
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:38:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:38:00 +0000   Sun, 02 Nov 2025 13:36:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:38:00 +0000   Sun, 02 Nov 2025 13:36:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:38:00 +0000   Sun, 02 Nov 2025 13:36:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:38:00 +0000   Sun, 02 Nov 2025 13:36:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-538419
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a8e8c9a3-24d1-4403-8143-5254b74d1185
	  Boot ID:                    23a23e75-aab1-4c9b-8448-aed5ef894a6b
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-4xsxx                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-538419                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-gc6n2                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-default-k8s-diff-port-538419             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-538419    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-nnhqs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-default-k8s-diff-port-538419             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-98t5k              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zcdhn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 111s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                 node-controller  Node default-k8s-diff-port-538419 event: Registered Node default-k8s-diff-port-538419 in Controller
	  Normal  NodeReady                100s                 kubelet          Node default-k8s-diff-port-538419 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node default-k8s-diff-port-538419 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                  node-controller  Node default-k8s-diff-port-538419 event: Registered Node default-k8s-diff-port-538419 in Controller
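The Conditions and Capacity tables above are what the NodePressure verification quoted earlier (node_conditions.go) walks. A minimal client-go sketch of reading them back (node name from this report; the kubeconfig path is a placeholder):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-538419", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Matches "node storage ephemeral capacity is 304681132Ki" / "cpu capacity is 8".
		fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
		fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %s\n", c.Type, c.Status) // MemoryPressure, DiskPressure, PIDPressure, Ready
		}
	}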
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: d2 73 a9 9b 41 62 e2 07 1a 31 72 fa 08 00
	[Nov 2 13:33] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[  +8.488078] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[  +4.489006] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fa d2 17 0e f1 08 06
	[  +0.000362] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e d5 29 ad e6 43 08 06
	[ +13.631233] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e a7 d3 27 bc 97 08 06
	[  +0.000413] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 c2 d4 0c ae cd 08 06
	[Nov 2 13:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a f3 d0 5a 13 de 08 06
	[  +0.001019] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	[Nov 2 13:35] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 02 36 63 7b 00 08 06
	[  +0.000379] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 d8 e3 e5 9f 45 08 06
	[ +22.255157] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c7 6b 72 61 92 08 06
	[  +0.000415] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6f fe 4c 80 7a 08 06
	
	
	==> etcd [9c0a5c5252f4d56b59b64d2c1d9c568cfc1da79c67c1dcec63e8421696e599fc] <==
	{"level":"warn","ts":"2025-11-02T13:37:28.713852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49164","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:49164: read: connection reset by peer"}
	{"level":"warn","ts":"2025-11-02T13:37:28.725303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.734810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.747052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.757952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.767704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.778291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.785963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.799589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.808196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.816700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.825068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.832820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.842053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.864614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.872605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.884113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.888179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.897443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.904797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.925332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.933169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.941393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:37:28.986475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49548","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-02T13:37:41.947832Z","caller":"traceutil/trace.go:172","msg":"trace[1515664450] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"114.480878ms","start":"2025-11-02T13:37:41.833316Z","end":"2025-11-02T13:37:41.947797Z","steps":["trace[1515664450] 'process raft request'  (duration: 112.102393ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:38:24 up  1:20,  0 user,  load average: 2.04, 3.51, 2.58
	Linux default-k8s-diff-port-538419 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9fe26d5a73cb2e5383872650fb2ecf2e6884d1ef50222efe25cfb4164f2b146f] <==
	I1102 13:37:30.364425       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:37:30.364701       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 13:37:30.364874       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:37:30.364895       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:37:30.364919       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:37:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:37:30.569964       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:37:30.570027       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:37:30.570037       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:37:30.570181       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:37:30.972314       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:37:30.972349       1 metrics.go:72] Registering metrics
	I1102 13:37:30.972417       1 controller.go:711] "Syncing nftables rules"
	I1102 13:37:40.570097       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:37:40.570176       1 main.go:301] handling current node
	I1102 13:37:50.572115       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:37:50.572174       1 main.go:301] handling current node
	I1102 13:38:00.569637       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:38:00.569671       1 main.go:301] handling current node
	I1102 13:38:10.569765       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:38:10.569802       1 main.go:301] handling current node
	I1102 13:38:20.577873       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 13:38:20.577904       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d75eaf3dc03db1c1123cb6f5efb6e26e31e9dfde569818d3081032549d3aaa3] <==
	I1102 13:37:29.513992       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 13:37:29.516502       1 aggregator.go:171] initial CRD sync complete...
	I1102 13:37:29.516559       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 13:37:29.516603       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 13:37:29.516629       1 cache.go:39] Caches are synced for autoregister controller
	I1102 13:37:29.516885       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 13:37:29.517508       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1102 13:37:29.521867       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 13:37:29.532022       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 13:37:29.542005       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:37:29.545236       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1102 13:37:29.558433       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1102 13:37:29.558541       1 policy_source.go:240] refreshing policies
	I1102 13:37:29.613415       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 13:37:29.882354       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:37:29.892631       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 13:37:29.926003       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:37:29.946880       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:37:29.960932       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:37:30.000241       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.91.164"}
	I1102 13:37:30.012349       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.253.253"}
	I1102 13:37:30.413179       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:37:33.265835       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 13:37:33.317096       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 13:37:33.367607       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [59c16f4262360662e0308b370e7a67959a5b06e8cc028e564875f164a10457ae] <==
	I1102 13:37:32.847152       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 13:37:32.848345       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 13:37:32.852553       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 13:37:32.854813       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 13:37:32.862315       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1102 13:37:32.862440       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 13:37:32.863462       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1102 13:37:32.863517       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 13:37:32.863533       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:37:32.863547       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:37:32.863558       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:37:32.863586       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 13:37:32.863655       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 13:37:32.864022       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 13:37:32.865951       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 13:37:32.868254       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 13:37:32.868358       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:37:32.869483       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 13:37:32.871830       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1102 13:37:32.873191       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 13:37:32.873291       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 13:37:32.873388       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-538419"
	I1102 13:37:32.873437       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1102 13:37:32.877296       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 13:37:32.881520       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5893cf1512ee0f6c8e74166fa347d602d16b90bbd7c1a8790852d522434c5fb6] <==
	I1102 13:37:30.178314       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:37:30.245710       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:37:30.346192       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:37:30.346254       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 13:37:30.346821       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:37:30.366049       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:37:30.366103       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:37:30.371161       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:37:30.371523       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:37:30.371538       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:30.372597       1 config.go:200] "Starting service config controller"
	I1102 13:37:30.372657       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:37:30.372672       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:37:30.372739       1 config.go:309] "Starting node config controller"
	I1102 13:37:30.372749       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:37:30.372756       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:37:30.372763       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:37:30.372777       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:37:30.372659       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:37:30.472825       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:37:30.474453       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:37:30.474457       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4b0ca32f1b94d4f05bd8579ce828633e44dc5642711c637607253d1f58fba4ca] <==
	I1102 13:37:28.251412       1 serving.go:386] Generated self-signed cert in-memory
	W1102 13:37:29.480654       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 13:37:29.480690       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 13:37:29.480701       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 13:37:29.480710       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 13:37:29.520723       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:37:29.520855       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:37:29.525635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:29.525718       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:37:29.526334       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:37:29.526421       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 13:37:29.625899       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:37:33 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:33.500371     751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjl58\" (UniqueName: \"kubernetes.io/projected/a8adaa2c-97af-4c5c-8dea-76b5d7fe8f9d-kube-api-access-fjl58\") pod \"kubernetes-dashboard-855c9754f9-zcdhn\" (UID: \"a8adaa2c-97af-4c5c-8dea-76b5d7fe8f9d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zcdhn"
	Nov 02 13:37:33 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:33.500423     751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8caa2918-0909-4d44-b89f-b91d119bf2dc-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-98t5k\" (UID: \"8caa2918-0909-4d44-b89f-b91d119bf2dc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k"
	Nov 02 13:37:33 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:33.500631     751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a8adaa2c-97af-4c5c-8dea-76b5d7fe8f9d-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-zcdhn\" (UID: \"a8adaa2c-97af-4c5c-8dea-76b5d7fe8f9d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zcdhn"
	Nov 02 13:37:36 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:36.179540     751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 02 13:37:36 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:36.823070     751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zcdhn" podStartSLOduration=0.916690371 podStartE2EDuration="3.823041771s" podCreationTimestamp="2025-11-02 13:37:33 +0000 UTC" firstStartedPulling="2025-11-02 13:37:33.763077882 +0000 UTC m=+7.107039008" lastFinishedPulling="2025-11-02 13:37:36.66942926 +0000 UTC m=+10.013390408" observedRunningTime="2025-11-02 13:37:36.822996341 +0000 UTC m=+10.166957487" watchObservedRunningTime="2025-11-02 13:37:36.823041771 +0000 UTC m=+10.167002916"
	Nov 02 13:37:39 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:39.819578     751 scope.go:117] "RemoveContainer" containerID="902076b4f117d5111fb4cb9e9e5feb66c35ebb663fded9cdf64fb74ecfa0a4a6"
	Nov 02 13:37:40 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:40.823358     751 scope.go:117] "RemoveContainer" containerID="902076b4f117d5111fb4cb9e9e5feb66c35ebb663fded9cdf64fb74ecfa0a4a6"
	Nov 02 13:37:40 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:40.823502     751 scope.go:117] "RemoveContainer" containerID="667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561"
	Nov 02 13:37:40 default-k8s-diff-port-538419 kubelet[751]: E1102 13:37:40.823769     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98t5k_kubernetes-dashboard(8caa2918-0909-4d44-b89f-b91d119bf2dc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k" podUID="8caa2918-0909-4d44-b89f-b91d119bf2dc"
	Nov 02 13:37:41 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:41.827068     751 scope.go:117] "RemoveContainer" containerID="667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561"
	Nov 02 13:37:41 default-k8s-diff-port-538419 kubelet[751]: E1102 13:37:41.827287     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98t5k_kubernetes-dashboard(8caa2918-0909-4d44-b89f-b91d119bf2dc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k" podUID="8caa2918-0909-4d44-b89f-b91d119bf2dc"
	Nov 02 13:37:49 default-k8s-diff-port-538419 kubelet[751]: I1102 13:37:49.468970     751 scope.go:117] "RemoveContainer" containerID="667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561"
	Nov 02 13:37:49 default-k8s-diff-port-538419 kubelet[751]: E1102 13:37:49.469202     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98t5k_kubernetes-dashboard(8caa2918-0909-4d44-b89f-b91d119bf2dc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k" podUID="8caa2918-0909-4d44-b89f-b91d119bf2dc"
	Nov 02 13:38:00 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:00.874515     751 scope.go:117] "RemoveContainer" containerID="a1deaef6b0856c7956d3d4a765a97d00c3bbca5496687d19141dbb5eebfcbe1e"
	Nov 02 13:38:02 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:02.755753     751 scope.go:117] "RemoveContainer" containerID="667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561"
	Nov 02 13:38:02 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:02.884224     751 scope.go:117] "RemoveContainer" containerID="667be3b1c28ab695b67ea9c0e2f0536bce84c82eb45a555dae2bbb35e695d561"
	Nov 02 13:38:02 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:02.884466     751 scope.go:117] "RemoveContainer" containerID="2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2"
	Nov 02 13:38:02 default-k8s-diff-port-538419 kubelet[751]: E1102 13:38:02.884697     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98t5k_kubernetes-dashboard(8caa2918-0909-4d44-b89f-b91d119bf2dc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k" podUID="8caa2918-0909-4d44-b89f-b91d119bf2dc"
	Nov 02 13:38:09 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:09.468908     751 scope.go:117] "RemoveContainer" containerID="2060b3aa9bf6596c4b58b9872d200c998aa2810b16c92e24d4246982d9eeb5e2"
	Nov 02 13:38:09 default-k8s-diff-port-538419 kubelet[751]: E1102 13:38:09.469144     751 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98t5k_kubernetes-dashboard(8caa2918-0909-4d44-b89f-b91d119bf2dc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98t5k" podUID="8caa2918-0909-4d44-b89f-b91d119bf2dc"
	Nov 02 13:38:20 default-k8s-diff-port-538419 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 13:38:20 default-k8s-diff-port-538419 kubelet[751]: I1102 13:38:20.195469     751 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 02 13:38:20 default-k8s-diff-port-538419 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 13:38:20 default-k8s-diff-port-538419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 02 13:38:20 default-k8s-diff-port-538419 systemd[1]: kubelet.service: Consumed 1.663s CPU time.
	
	
	==> kubernetes-dashboard [3b4d565f2df6b7af050261a5726ef42418b7b75d9b27549b6ac006690f117bb7] <==
	2025/11/02 13:37:36 Using namespace: kubernetes-dashboard
	2025/11/02 13:37:36 Using in-cluster config to connect to apiserver
	2025/11/02 13:37:36 Using secret token for csrf signing
	2025/11/02 13:37:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 13:37:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 13:37:36 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 13:37:36 Generating JWE encryption key
	2025/11/02 13:37:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 13:37:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 13:37:37 Initializing JWE encryption key from synchronized object
	2025/11/02 13:37:37 Creating in-cluster Sidecar client
	2025/11/02 13:37:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 13:37:37 Serving insecurely on HTTP port: 9090
	2025/11/02 13:38:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 13:37:36 Starting overwatch
	
	
	==> storage-provisioner [a1deaef6b0856c7956d3d4a765a97d00c3bbca5496687d19141dbb5eebfcbe1e] <==
	I1102 13:37:30.142705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 13:38:00.145516       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b5f1c0f89cbd2f51c3aaa26c52b8294c097315dfc1d6837326aa1ee0f4d16da0] <==
	I1102 13:38:00.925554       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:38:00.931882       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:38:00.931913       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 13:38:00.933893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:04.389170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:08.650508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:12.249267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:15.302207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:18.324610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:18.328838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:38:18.328995       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:38:18.329068       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"030faaca-fc27-4b34-be7e-e6cc7b667e6a", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-538419_69e20460-1f5a-4fce-91b8-a20e50b4f13b became leader
	I1102 13:38:18.329141       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538419_69e20460-1f5a-4fce-91b8-a20e50b4f13b!
	W1102 13:38:18.330818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:18.334937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:38:18.429365       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538419_69e20460-1f5a-4fce-91b8-a20e50b4f13b!
	W1102 13:38:20.338013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:20.341820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:22.345616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:22.350378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:24.353784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:38:24.357359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
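The storage-provisioner section of the log above is a complete client-go leader-election handshake: the restarted pod announces "attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath", polls for about 17 seconds while the old holder's lease ages out, then acquires the lease at 13:38:18 and starts the provisioner controller. The interleaved "v1 Endpoints is deprecated" warnings appear because the provisioner still locks on an Endpoints object. Below is a minimal sketch of the same pattern, assuming client-go's Lease-based lock instead; the identity string and timings are illustrative, not the provisioner's actual values.

// leaderelect.go: sketch of the client-go leader-election pattern seen
// in the storage-provisioner log, using a Lease lock. Identity and
// durations are hypothetical placeholders.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same lock namespace/name as in the log above; the lock object
	// type differs (Lease instead of the provisioner's Endpoints).
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "sketch-holder-1"}, // hypothetical identity
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long a lease stays valid without renewal
		RenewDeadline: 10 * time.Second, // the leader must renew within this window
		RetryPeriod:   2 * time.Second,  // non-leaders poll at roughly this interval
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting provisioner controller")
			},
			OnStoppedLeading: func() { log.Println("lost leadership; shutting down") },
		},
	})
}

With a Lease lock the coordination traffic goes to coordination.k8s.io/v1, so the Endpoints deprecation warnings disappear.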
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419: exit status 2 (316.995829ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-538419 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.83s)
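The status probe in this post-mortem renders minikube's state through a Go text/template supplied via --format and treats the exit code separately from the rendered text. A minimal sketch of that evaluation follows; the Status struct is a hypothetical mirror of the fields the command exposes, and only APIServer is confirmed by the output above.

// statusfmt.go: sketch of evaluating a --format template like
// {{.APIServer}} with Go's text/template. Struct fields other than
// APIServer are assumed, not taken from minikube's source.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// The exact template string passed to the status command above.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints "Running": the template selects a single field, so it can
	// look healthy even when the exit code reflects other components.
}

That split is why the harness can print "Running" for the API server yet still record exit status 2 with the note "(may be ok)": the rendered field and the overall cluster state are reported through different channels.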

                                                
                                    

Test pass (263/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.22
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.26
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.42
21 TestBinaryMirror 0.82
22 TestOffline 53.4
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 163.87
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 8.43
48 TestAddons/StoppedEnableDisable 18.93
49 TestCertOptions 31.77
50 TestCertExpiration 224.65
52 TestForceSystemdFlag 25.7
53 TestForceSystemdEnv 33.92
58 TestErrorSpam/setup 19.47
59 TestErrorSpam/start 0.65
60 TestErrorSpam/status 0.94
61 TestErrorSpam/pause 5.4
62 TestErrorSpam/unpause 6.22
63 TestErrorSpam/stop 8.1
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.99
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.47
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.62
75 TestFunctional/serial/CacheCmd/cache/add_local 0.76
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 62.23
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.24
86 TestFunctional/serial/LogsFileCmd 1.28
87 TestFunctional/serial/InvalidService 4.1
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 9.11
91 TestFunctional/parallel/DryRun 0.5
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 0.94
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 21.31
101 TestFunctional/parallel/SSHCmd 0.65
102 TestFunctional/parallel/CpCmd 1.67
103 TestFunctional/parallel/MySQL 14.96
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.92
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
113 TestFunctional/parallel/License 0.29
114 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
115 TestFunctional/parallel/Version/short 0.07
116 TestFunctional/parallel/Version/components 0.47
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.79
122 TestFunctional/parallel/ImageCommands/Setup 0.46
123 TestFunctional/parallel/ProfileCmd/profile_list 0.47
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.22
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/MountCmd/any-port 6.74
145 TestFunctional/parallel/MountCmd/specific-port 1.62
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
150 TestFunctional/parallel/ServiceCmd/List 1.7
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 104.32
163 TestMultiControlPlane/serial/DeployApp 4.71
164 TestMultiControlPlane/serial/PingHostFromPods 1
165 TestMultiControlPlane/serial/AddWorkerNode 24.85
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
168 TestMultiControlPlane/serial/CopyFile 17.1
169 TestMultiControlPlane/serial/StopSecondaryNode 13.26
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.06
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 198.15
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.1
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
176 TestMultiControlPlane/serial/StopCluster 47.28
177 TestMultiControlPlane/serial/RestartCluster 57.34
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
179 TestMultiControlPlane/serial/AddSecondaryNode 65.38
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
185 TestJSONOutput/start/Command 39.27
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.99
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 31.07
211 TestKicCustomNetwork/use_default_bridge_network 27.04
212 TestKicExistingNetwork 28.51
213 TestKicCustomSubnet 24.1
214 TestKicStaticIP 26.29
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 48.38
219 TestMountStart/serial/StartWithMountFirst 5.63
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 5.85
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.37
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 60.49
231 TestMultiNode/serial/DeployApp2Nodes 3.62
232 TestMultiNode/serial/PingHostFrom2Pods 0.69
233 TestMultiNode/serial/AddNode 54.67
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.64
236 TestMultiNode/serial/CopyFile 9.68
237 TestMultiNode/serial/StopNode 2.25
238 TestMultiNode/serial/StartAfterStop 7.36
239 TestMultiNode/serial/RestartKeepsNodes 68.5
240 TestMultiNode/serial/DeleteNode 5.22
241 TestMultiNode/serial/StopMultiNode 30.34
242 TestMultiNode/serial/RestartMultiNode 48.29
243 TestMultiNode/serial/ValidateNameConflict 23.19
248 TestPreload 103.14
250 TestScheduledStopUnix 96.32
253 TestInsufficientStorage 9.59
254 TestRunningBinaryUpgrade 49.02
256 TestKubernetesUpgrade 309.22
257 TestMissingContainerUpgrade 85.77
259 TestPause/serial/Start 49.61
260 TestPause/serial/SecondStartNoReconfiguration 6.65
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
263 TestNoKubernetes/serial/StartWithK8s 26.35
272 TestNetworkPlugins/group/false 3.69
276 TestStoppedBinaryUpgrade/Setup 0.58
277 TestStoppedBinaryUpgrade/Upgrade 64.43
278 TestNoKubernetes/serial/StartWithStopK8s 27.55
279 TestNoKubernetes/serial/Start 7.73
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
281 TestNoKubernetes/serial/ProfileList 10.34
282 TestNoKubernetes/serial/Stop 1.36
283 TestNoKubernetes/serial/StartNoArgs 6.53
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
293 TestNetworkPlugins/group/auto/Start 40.56
294 TestNetworkPlugins/group/kindnet/Start 39.31
295 TestNetworkPlugins/group/auto/KubeletFlags 0.31
296 TestNetworkPlugins/group/auto/NetCatPod 9.22
297 TestNetworkPlugins/group/calico/Start 55.17
298 TestNetworkPlugins/group/auto/DNS 0.11
299 TestNetworkPlugins/group/auto/Localhost 0.09
300 TestNetworkPlugins/group/auto/HairPin 0.09
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
303 TestNetworkPlugins/group/kindnet/NetCatPod 8.2
304 TestNetworkPlugins/group/custom-flannel/Start 47.04
305 TestNetworkPlugins/group/kindnet/DNS 0.13
306 TestNetworkPlugins/group/kindnet/Localhost 0.11
307 TestNetworkPlugins/group/kindnet/HairPin 0.1
308 TestNetworkPlugins/group/enable-default-cni/Start 38.49
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.3
311 TestNetworkPlugins/group/calico/NetCatPod 8.2
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
314 TestNetworkPlugins/group/calico/DNS 0.12
315 TestNetworkPlugins/group/calico/Localhost 0.1
316 TestNetworkPlugins/group/calico/HairPin 0.09
317 TestNetworkPlugins/group/custom-flannel/DNS 0.12
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
320 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
321 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.19
322 TestNetworkPlugins/group/flannel/Start 51.85
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
326 TestNetworkPlugins/group/bridge/Start 70.18
328 TestStartStop/group/old-k8s-version/serial/FirstStart 49.57
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
331 TestNetworkPlugins/group/flannel/NetCatPod 9.21
332 TestNetworkPlugins/group/flannel/DNS 0.12
333 TestNetworkPlugins/group/flannel/Localhost 0.1
334 TestNetworkPlugins/group/flannel/HairPin 0.1
335 TestStartStop/group/old-k8s-version/serial/DeployApp 8.25
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
337 TestNetworkPlugins/group/bridge/NetCatPod 9.24
339 TestStartStop/group/old-k8s-version/serial/Stop 16.99
341 TestStartStop/group/no-preload/serial/FirstStart 51.43
342 TestNetworkPlugins/group/bridge/DNS 0.15
343 TestNetworkPlugins/group/bridge/Localhost 0.15
344 TestNetworkPlugins/group/bridge/HairPin 0.15
345 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
346 TestStartStop/group/old-k8s-version/serial/SecondStart 50.32
348 TestStartStop/group/embed-certs/serial/FirstStart 47.53
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.4
351 TestStartStop/group/no-preload/serial/DeployApp 8.28
353 TestStartStop/group/no-preload/serial/Stop 16.29
354 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
355 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
356 TestStartStop/group/embed-certs/serial/DeployApp 8.23
357 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
361 TestStartStop/group/no-preload/serial/SecondStart 51.95
362 TestStartStop/group/embed-certs/serial/Stop 18.12
364 TestStartStop/group/newest-cni/serial/FirstStart 28.23
365 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.25
367 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.75
368 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
369 TestStartStop/group/embed-certs/serial/SecondStart 48.67
370 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/Stop 2.76
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
374 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.03
375 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
376 TestStartStop/group/newest-cni/serial/SecondStart 12.18
377 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
381 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
383 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
385 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
386 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
387 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
TestDownloadOnly/v1.28.0/json-events (5.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-793938 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-793938 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.224463636s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1102 12:47:03.441677   12914 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1102 12:47:03.441773   12914 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
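preload-exists is a pure filesystem assertion: it passes if the tarball cached by the json-events step is still on disk at the path that preload.go:198 prints. An equivalent stand-alone check is sketched below, assuming MINIKUBE_HOME points at the run's .minikube directory as in the environment shown later in this report.

// preloadcheck.go: sketch of the on-disk check preload-exists performs.
// Path components are taken from the preload.go:198 line above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	p := filepath.Join(os.Getenv("MINIKUBE_HOME"), "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
	if fi, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
	} else {
		fmt.Printf("found local preload: %s (%d bytes)\n", p, fi.Size())
	}
}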

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-793938
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-793938: exit status 85 (73.966006ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-793938 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-793938 │ jenkins │ v1.37.0 │ 02 Nov 25 12:46 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 12:46:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 12:46:58.267884   12926 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:46:58.268122   12926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:46:58.268131   12926 out.go:374] Setting ErrFile to fd 2...
	I1102 12:46:58.268135   12926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:46:58.268311   12926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	W1102 12:46:58.268429   12926 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21808-9416/.minikube/config/config.json: open /home/jenkins/minikube-integration/21808-9416/.minikube/config/config.json: no such file or directory
	I1102 12:46:58.268875   12926 out.go:368] Setting JSON to true
	I1102 12:46:58.269731   12926 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1770,"bootTime":1762085848,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 12:46:58.269815   12926 start.go:143] virtualization: kvm guest
	I1102 12:46:58.271956   12926 out.go:99] [download-only-793938] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1102 12:46:58.272052   12926 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball: no such file or directory
	I1102 12:46:58.272085   12926 notify.go:221] Checking for updates...
	I1102 12:46:58.273270   12926 out.go:171] MINIKUBE_LOCATION=21808
	I1102 12:46:58.274443   12926 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 12:46:58.275744   12926 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 12:46:58.276970   12926 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 12:46:58.280803   12926 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1102 12:46:58.282758   12926 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1102 12:46:58.282993   12926 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 12:46:58.305855   12926 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 12:46:58.305966   12926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:46:58.717589   12926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-02 12:46:58.70713466 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:46:58.717743   12926 docker.go:319] overlay module found
	I1102 12:46:58.719486   12926 out.go:99] Using the docker driver based on user configuration
	I1102 12:46:58.719511   12926 start.go:309] selected driver: docker
	I1102 12:46:58.719517   12926 start.go:930] validating driver "docker" against <nil>
	I1102 12:46:58.719625   12926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:46:58.779720   12926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-02 12:46:58.770316725 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:46:58.779907   12926 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 12:46:58.780737   12926 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1102 12:46:58.780962   12926 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1102 12:46:58.782628   12926 out.go:171] Using Docker driver with root privileges
	I1102 12:46:58.783911   12926 cni.go:84] Creating CNI manager for ""
	I1102 12:46:58.783972   12926 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 12:46:58.783983   12926 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 12:46:58.784056   12926 start.go:353] cluster config:
	{Name:download-only-793938 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-793938 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 12:46:58.785448   12926 out.go:99] Starting "download-only-793938" primary control-plane node in "download-only-793938" cluster
	I1102 12:46:58.785465   12926 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 12:46:58.786681   12926 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1102 12:46:58.786706   12926 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 12:46:58.786834   12926 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 12:46:58.804351   12926 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1102 12:46:58.804540   12926 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1102 12:46:58.804643   12926 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1102 12:46:58.805786   12926 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1102 12:46:58.805806   12926 cache.go:59] Caching tarball of preloaded images
	I1102 12:46:58.805902   12926 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 12:46:58.807753   12926 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1102 12:46:58.807773   12926 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1102 12:46:58.834303   12926 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1102 12:46:58.834430   12926 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1102 12:47:01.949275   12926 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1102 12:47:01.949741   12926 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/download-only-793938/config.json ...
	I1102 12:47:01.949789   12926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/download-only-793938/config.json: {Name:mk24816353ebba0dee74e6b0a99f6cbfd3739e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 12:47:01.949997   12926 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 12:47:01.950232   12926 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21808-9416/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-793938 host does not exist
	  To start a cluster, run: "minikube start -p download-only-793938"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
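
Note on the preload fetch above: the downloader first asks the GCS API for the tarball's md5 (logged as "Got checksum from GCS API"), then downloads with a `?checksum=md5:...` query so the artifact can be verified against that digest. A rough manual equivalent using the exact URL and checksum from this log (a sketch, not minikube's own code path):

    $ curl -fLo preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
    $ echo "72bc7f8573f574c02d8c9a9b3496176b  preload.tar.lz4" | md5sum -c -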

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-793938
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (4.26s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-537260 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-537260 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.260685053s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.26s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1102 12:47:08.152557   12914 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1102 12:47:08.152619   12914 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-9416/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-537260
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-537260: exit status 85 (77.064164ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-793938 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-793938 │ jenkins │ v1.37.0 │ 02 Nov 25 12:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:47 UTC │
	│ delete  │ -p download-only-793938                                                                                                                                                   │ download-only-793938 │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │ 02 Nov 25 12:47 UTC │
	│ start   │ -o=json --download-only -p download-only-537260 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-537260 │ jenkins │ v1.37.0 │ 02 Nov 25 12:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 12:47:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 12:47:03.942935   13279 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:47:03.943237   13279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:47:03.943249   13279 out.go:374] Setting ErrFile to fd 2...
	I1102 12:47:03.943255   13279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:47:03.943442   13279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:47:03.943932   13279 out.go:368] Setting JSON to true
	I1102 12:47:03.944759   13279 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1776,"bootTime":1762085848,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 12:47:03.944845   13279 start.go:143] virtualization: kvm guest
	I1102 12:47:03.947098   13279 out.go:99] [download-only-537260] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 12:47:03.947270   13279 notify.go:221] Checking for updates...
	I1102 12:47:03.948790   13279 out.go:171] MINIKUBE_LOCATION=21808
	I1102 12:47:03.950019   13279 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 12:47:03.951314   13279 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 12:47:03.952591   13279 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 12:47:03.953799   13279 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1102 12:47:03.956209   13279 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1102 12:47:03.956476   13279 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 12:47:03.980883   13279 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 12:47:03.980962   13279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:47:04.036160   13279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-02 12:47:04.026932624 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:47:04.036254   13279 docker.go:319] overlay module found
	I1102 12:47:04.038178   13279 out.go:99] Using the docker driver based on user configuration
	I1102 12:47:04.038218   13279 start.go:309] selected driver: docker
	I1102 12:47:04.038226   13279 start.go:930] validating driver "docker" against <nil>
	I1102 12:47:04.038311   13279 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:47:04.093559   13279 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-02 12:47:04.084281851 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:47:04.093777   13279 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 12:47:04.094443   13279 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1102 12:47:04.094647   13279 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1102 12:47:04.096409   13279 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-537260 host does not exist
	  To start a cluster, run: "minikube start -p download-only-537260"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-537260
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-117507 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-117507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-117507
--- PASS: TestDownloadOnlyKic (0.42s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
I1102 12:47:09.317173   12914 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-594380 --alsologtostderr --binary-mirror http://127.0.0.1:44217 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-594380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-594380
--- PASS: TestBinaryMirror (0.82s)

TestOffline (53.4s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-063012 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-063012 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (50.711967921s)
helpers_test.go:175: Cleaning up "offline-crio-063012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-063012
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-063012: (2.687434296s)
--- PASS: TestOffline (53.40s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-341255
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-341255: exit status 85 (59.784802ms)
-- stdout --
	* Profile "addons-341255" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-341255"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-341255
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-341255: exit status 85 (60.953809ms)
-- stdout --
	* Profile "addons-341255" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-341255"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (163.87s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-341255 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-341255 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m43.866768264s)
--- PASS: TestAddons/Setup (163.87s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-341255 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-341255 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (8.43s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-341255 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-341255 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4bcca581-17a7-4233-ac03-1874944a76d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4bcca581-17a7-4233-ac03-1874944a76d9] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00329836s
addons_test.go:694: (dbg) Run:  kubectl --context addons-341255 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-341255 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-341255 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.43s)
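
What the FakeCredentials steps assert, in kubectl terms: the gcp-auth addon should have injected fake Google credentials into the freshly created busybox pod, which is why the test printenv's GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT. A manual spot-check along the same lines (assuming the pod from testdata/busybox.yaml is still running):

    $ kubectl --context addons-341255 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
    # both should print; the credentials variable points at the key file the addon mounts into the pod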

TestAddons/StoppedEnableDisable (18.93s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-341255
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-341255: (18.658480232s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-341255
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-341255
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-341255
--- PASS: TestAddons/StoppedEnableDisable (18.93s)

TestCertOptions (31.77s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-514605 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-514605 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.610476374s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-514605 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-514605 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-514605 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-514605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-514605
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-514605: (5.352941024s)
--- PASS: TestCertOptions (31.77s)

TestCertExpiration (224.65s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-110310 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-110310 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.182549109s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-110310 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-110310 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.726240833s)
helpers_test.go:175: Cleaning up "cert-expiration-110310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-110310
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-110310: (2.74050217s)
--- PASS: TestCertExpiration (224.65s)
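
TestCertExpiration is a two-phase start: the first run issues cluster certificates that expire in 3m, the test waits out that window, and the second run with --cert-expiration=8760h must regenerate the expired certs on restart (hence the quick ~6.7s second start). To eyeball a live profile's apiserver cert lifetime by hand, the same openssl pattern TestCertOptions uses above works (hypothetical spot-check; substitute a profile that still exists):

    $ out/minikube-linux-amd64 ssh -p <profile> -- "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"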

TestForceSystemdFlag (25.7s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-600209 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-600209 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.23274412s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-600209 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-600209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-600209
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-600209: (3.134880173s)
--- PASS: TestForceSystemdFlag (25.70s)
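
The `cat /etc/crio/crio.conf.d/02-crio.conf` step is where --force-systemd gets verified: the test reads CRI-O's drop-in config off the node and checks which cgroup manager it selects. Presumably the manual equivalent is a grep like the one below (a sketch; the exact string the test matches is an assumption):

    $ out/minikube-linux-amd64 -p force-systemd-flag-600209 ssh -- "sudo grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    # expected output: cgroup_manager = "systemd"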

TestForceSystemdEnv (33.92s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-091295 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-091295 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.333527407s)
helpers_test.go:175: Cleaning up "force-systemd-env-091295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-091295
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-091295: (2.587474954s)
--- PASS: TestForceSystemdEnv (33.92s)

TestErrorSpam/setup (19.47s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-372665 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-372665 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-372665 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-372665 --driver=docker  --container-runtime=crio: (19.473359202s)
--- PASS: TestErrorSpam/setup (19.47s)

TestErrorSpam/start (0.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

TestErrorSpam/status (0.94s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 status
--- PASS: TestErrorSpam/status (0.94s)

TestErrorSpam/pause (5.4s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 pause: exit status 80 (1.499095724s)
-- stdout --
	* Pausing node nospam-372665 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:53:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 pause: exit status 80 (1.873905352s)
-- stdout --
	* Pausing node nospam-372665 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:53:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 pause: exit status 80 (2.027718968s)
-- stdout --
	* Pausing node nospam-372665 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:53:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.40s)
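
All three pause attempts die at the same step: before pausing anything, minikube lists running containers on the node via `sudo runc list -f json`, and runc aborts because its default state directory /run/runc does not exist on this crio node. A minimal reproduction against the same profile (hypothetical follow-up commands, not part of the test):

    $ out/minikube-linux-amd64 -p nospam-372665 ssh -- "sudo runc list -f json"   # open /run/runc: no such file or directory
    $ out/minikube-linux-amd64 -p nospam-372665 ssh -- "sudo crictl ps"           # CRI-O's own view of the containers still works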

TestErrorSpam/unpause (6.22s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 unpause: exit status 80 (1.835665917s)
-- stdout --
	* Unpausing node nospam-372665 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:53:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 unpause: exit status 80 (2.127184137s)
-- stdout --
	* Unpausing node nospam-372665 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:53:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 unpause: exit status 80 (2.252844613s)
-- stdout --
	* Unpausing node nospam-372665 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T12:53:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.22s)

TestErrorSpam/stop (8.1s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 stop: (7.900184194s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-372665 --log_dir /tmp/nospam-372665 stop
--- PASS: TestErrorSpam/stop (8.10s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21808-9416/.minikube/files/etc/test/nested/copy/12914/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.99s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-529076 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-529076 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.99141019s)
--- PASS: TestFunctional/serial/StartWithProxy (38.99s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.47s)

=== RUN   TestFunctional/serial/SoftStart
I1102 12:54:23.053974   12914 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-529076 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-529076 --alsologtostderr -v=8: (6.472706947s)
functional_test.go:678: soft start took 6.473433402s for "functional-529076" cluster.
I1102 12:54:29.527053   12914 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.47s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-529076 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.62s)

TestFunctional/serial/CacheCmd/cache/add_local (0.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-529076 /tmp/TestFunctionalserialCacheCmdcacheadd_local3880025103/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 cache add minikube-local-cache-test:functional-529076
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 cache delete minikube-local-cache-test:functional-529076
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-529076
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.76s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.695893ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
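
The cache_reload sequence doubles as a recipe for recovering a cached image that was removed from the node's runtime: delete it with crictl, confirm the lookup fails, run `cache reload` to push the cached image back, then confirm the lookup succeeds. Condensed from the log above:

    $ out/minikube-linux-amd64 -p functional-529076 ssh sudo crictl rmi registry.k8s.io/pause:latest
    $ out/minikube-linux-amd64 -p functional-529076 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    $ out/minikube-linux-amd64 -p functional-529076 cache reload
    $ out/minikube-linux-amd64 -p functional-529076 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again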

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 kubectl -- --context functional-529076 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-529076 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
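
These two tests check the same pass-through behavior: `minikube kubectl` hands everything after `--` to a version-matched kubectl binary, so the wrapper and the downloaded binary are interchangeable, e.g.:

    out/minikube-linux-amd64 -p functional-529076 kubectl -- get pods -A   # via the wrapper
    out/kubectl --context functional-529076 get pods -A                    # calling the fetched binary directly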

TestFunctional/serial/ExtraConfig (62.23s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-529076 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1102 12:54:54.695743   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:54:54.702135   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:54:54.713502   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:54:54.734856   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:54:54.776212   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:54:54.857651   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:54:55.019180   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:54:55.340794   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:54:55.982745   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:54:57.264763   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:54:59.826465   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:55:04.947950   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:55:15.190148   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:55:35.671557   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-529076 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.230818756s)
functional_test.go:776: restart took 1m2.230947357s for "functional-529076" cluster.
I1102 12:55:37.518334   12914 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (62.23s)
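
The `--extra-config` flag used for this restart takes `component.key=value` pairs; here it appends an admission plugin to the apiserver, and `--wait=all` blocks until every verified component reports healthy. The shape of the invocation:

    out/minikube-linux-amd64 start -p functional-529076 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all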

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-529076 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
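
The health check above parses `kubectl get po -o=json`; an equivalent one-liner for eyeballing the same fields (a sketch, not the test's own code) could use jsonpath:

    kubectl --context functional-529076 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}: {.status.phase}{"\n"}{end}'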

TestFunctional/serial/LogsCmd (1.24s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-529076 logs: (1.235291371s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

TestFunctional/serial/LogsFileCmd (1.28s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 logs --file /tmp/TestFunctionalserialLogsFileCmd2651210806/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-529076 logs --file /tmp/TestFunctionalserialLogsFileCmd2651210806/001/logs.txt: (1.281251779s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

TestFunctional/serial/InvalidService (4.1s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-529076 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-529076
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-529076: exit status 115 (344.024522ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30918 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-529076 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)

TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 config get cpus: exit status 14 (78.373368ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 config get cpus: exit status 14 (88.760875ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
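
Exit status 14 here accompanies the "specified key could not be found in config" error, so the sequence verifies the full set/get/unset round trip:

    out/minikube-linux-amd64 -p functional-529076 config set cpus 2     # write a key
    out/minikube-linux-amd64 -p functional-529076 config get cpus       # prints 2, exit 0
    out/minikube-linux-amd64 -p functional-529076 config unset cpus     # remove it
    out/minikube-linux-amd64 -p functional-529076 config get cpus       # exit 14: key not found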

TestFunctional/parallel/DashboardCmd (9.11s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-529076 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-529076 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 51904: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.11s)

TestFunctional/parallel/DryRun (0.5s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-529076 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-529076 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (205.187679ms)
-- stdout --
	* [functional-529076] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1102 12:56:08.440326   51417 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:56:08.440871   51417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:56:08.440885   51417 out.go:374] Setting ErrFile to fd 2...
	I1102 12:56:08.440892   51417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:56:08.441272   51417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:56:08.441888   51417 out.go:368] Setting JSON to false
	I1102 12:56:08.443081   51417 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2320,"bootTime":1762085848,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 12:56:08.443159   51417 start.go:143] virtualization: kvm guest
	I1102 12:56:08.445138   51417 out.go:179] * [functional-529076] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 12:56:08.446727   51417 notify.go:221] Checking for updates...
	I1102 12:56:08.446735   51417 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 12:56:08.448171   51417 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 12:56:08.449769   51417 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 12:56:08.451328   51417 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 12:56:08.455125   51417 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 12:56:08.456523   51417 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 12:56:08.458138   51417 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:56:08.458753   51417 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 12:56:08.488600   51417 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 12:56:08.488695   51417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:56:08.561096   51417 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-02 12:56:08.548854785 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:56:08.561245   51417 docker.go:319] overlay module found
	I1102 12:56:08.563789   51417 out.go:179] * Using the docker driver based on existing profile
	I1102 12:56:08.565814   51417 start.go:309] selected driver: docker
	I1102 12:56:08.565829   51417 start.go:930] validating driver "docker" against &{Name:functional-529076 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-529076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 12:56:08.565952   51417 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 12:56:08.568124   51417 out.go:203] 
	W1102 12:56:08.569276   51417 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1102 12:56:08.570757   51417 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-529076 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.50s)
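
`--dry-run` runs the full start-time validation without creating or mutating anything, which is why the 250MB request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) while the second invocation passes:

    out/minikube-linux-amd64 start -p functional-529076 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # rejected: below the 1800MB usable minimum
    out/minikube-linux-amd64 start -p functional-529076 --dry-run --driver=docker --container-runtime=crio                  # validation only, succeeds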

TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-529076 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-529076 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (165.601968ms)
-- stdout --
	* [functional-529076] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1102 12:55:55.002252   47761 out.go:360] Setting OutFile to fd 1 ...
	I1102 12:55:55.002496   47761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:55:55.002504   47761 out.go:374] Setting ErrFile to fd 2...
	I1102 12:55:55.002508   47761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 12:55:55.002868   47761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 12:55:55.003306   47761 out.go:368] Setting JSON to false
	I1102 12:55:55.004166   47761 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2307,"bootTime":1762085848,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 12:55:55.004261   47761 start.go:143] virtualization: kvm guest
	I1102 12:55:55.006227   47761 out.go:179] * [functional-529076] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1102 12:55:55.007503   47761 notify.go:221] Checking for updates...
	I1102 12:55:55.007535   47761 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 12:55:55.008817   47761 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 12:55:55.010319   47761 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 12:55:55.011807   47761 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 12:55:55.013092   47761 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 12:55:55.014426   47761 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 12:55:55.016112   47761 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 12:55:55.016654   47761 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 12:55:55.040580   47761 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 12:55:55.040698   47761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 12:55:55.098233   47761 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-02 12:55:55.087601464 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 12:55:55.098345   47761 docker.go:319] overlay module found
	I1102 12:55:55.099971   47761 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1102 12:55:55.101194   47761 start.go:309] selected driver: docker
	I1102 12:55:55.101208   47761 start.go:930] validating driver "docker" against &{Name:functional-529076 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-529076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 12:55:55.101309   47761 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 12:55:55.103101   47761 out.go:203] 
	W1102 12:55:55.104342   47761 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1102 12:55:55.105558   47761 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (0.94s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)
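
`status -f` accepts a Go template over the status struct; the fields exercised above are Host, Kubelet, APIServer and Kubeconfig. Equivalent manual checks, as a sketch:

    out/minikube-linux-amd64 -p functional-529076 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    out/minikube-linux-amd64 -p functional-529076 status -o json   # same data, machine-readable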

TestFunctional/parallel/AddonsCmd (0.19s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (21.31s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [cac78b3f-7d81-422e-8250-3a4adcee2d30] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003460577s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-529076 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-529076 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-529076 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-529076 apply -f testdata/storage-provisioner/pod.yaml
I1102 12:55:51.514838   12914 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8d961cf0-0c5e-45f3-969f-8261e2326deb] Pending
helpers_test.go:352: "sp-pod" [8d961cf0-0c5e-45f3-969f-8261e2326deb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8d961cf0-0c5e-45f3-969f-8261e2326deb] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003775869s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-529076 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-529076 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-529076 apply -f testdata/storage-provisioner/pod.yaml
I1102 12:56:00.369206   12914 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [604880c3-e0a7-4667-8c7b-75b6d1974403] Pending
helpers_test.go:352: "sp-pod" [604880c3-e0a7-4667-8c7b-75b6d1974403] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003967086s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-529076 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.31s)
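
Condensed, the test proves that data written through the claim survives pod deletion; the same sequence by hand:

    kubectl --context functional-529076 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-529076 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-529076 exec sp-pod -- touch /tmp/mount/foo              # write through the mounted volume
    kubectl --context functional-529076 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-529076 apply -f testdata/storage-provisioner/pod.yaml   # fresh pod, same claim
    kubectl --context functional-529076 exec sp-pod -- ls /tmp/mount                     # foo is still there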

TestFunctional/parallel/SSHCmd (0.65s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

TestFunctional/parallel/CpCmd (1.67s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh -n functional-529076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 cp functional-529076:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1397805226/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh -n functional-529076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh -n functional-529076 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.67s)
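
`minikube cp` copies in either direction; the `profile:` prefix names the node side. A sketch (the /tmp destination below is a placeholder, not from the test):

    out/minikube-linux-amd64 -p functional-529076 cp testdata/cp-test.txt /home/docker/cp-test.txt                # host -> node
    out/minikube-linux-amd64 -p functional-529076 cp functional-529076:/home/docker/cp-test.txt /tmp/cp-test.txt  # node -> host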

TestFunctional/parallel/MySQL (14.96s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-529076 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-hd752" [8cd79793-dad3-4bea-8671-6adb423977ca] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-hd752" [8cd79793-dad3-4bea-8671-6adb423977ca] Running
E1102 12:56:16.633680   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
2025/11/02 12:56:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 12.003952078s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-529076 exec mysql-5bb876957f-hd752 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-529076 exec mysql-5bb876957f-hd752 -- mysql -ppassword -e "show databases;": exit status 1 (93.905123ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1102 12:56:18.792882   12914 retry.go:31] will retry after 822.084445ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-529076 exec mysql-5bb876957f-hd752 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-529076 exec mysql-5bb876957f-hd752 -- mysql -ppassword -e "show databases;": exit status 1 (95.14929ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1102 12:56:19.710482   12914 retry.go:31] will retry after 1.681281079s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-529076 exec mysql-5bb876957f-hd752 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (14.96s)
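
The two intermediate failures are the normal MySQL startup sequence: the pod reports Running before mysqld has finished initializing, so early connections fail (here first with an auth error, then with a socket error), hence the retries. A hand-rolled equivalent, as a sketch that assumes the deployment is named mysql:

    until kubectl --context functional-529076 exec deploy/mysql -- mysql -ppassword -e 'show databases;'; do
      sleep 2   # keep retrying until the server is actually accepting authenticated connections
    done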

TestFunctional/parallel/FileSync (0.28s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12914/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "sudo cat /etc/test/nested/copy/12914/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.92s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12914.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "sudo cat /etc/ssl/certs/12914.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12914.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "sudo cat /usr/share/ca-certificates/12914.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/129142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "sudo cat /etc/ssl/certs/129142.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/129142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "sudo cat /usr/share/ca-certificates/129142.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.92s)
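
The `.0` files checked above follow OpenSSL's hashed-directory convention: a certificate is also installed under its subject hash so verification code can find it by name. The hash for a given PEM can be computed with (shown for the pair exercised above; the pairing is inferred from the test sequence):

    openssl x509 -noout -subject_hash -in /etc/ssl/certs/12914.pem   # expected to print 51391683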

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-529076 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 ssh "sudo systemctl is-active docker": exit status 1 (333.722292ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 ssh "sudo systemctl is-active containerd": exit status 1 (324.311589ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
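
`systemctl is-active` prints the unit state and exits non-zero when the unit is not active; the remote shell reports status 3 for `inactive`, which `minikube ssh` surfaces as exit 1. Since this profile runs crio, only docker and containerd are expected to be inactive:

    out/minikube-linux-amd64 -p functional-529076 ssh "sudo systemctl is-active crio"    # should print active, exit 0 (assumes the unit is named crio)
    out/minikube-linux-amd64 -p functional-529076 ssh "sudo systemctl is-active docker"  # inactive, non-zero exit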

TestFunctional/parallel/License (0.29s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.47s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-529076 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-529076 image ls --format short --alsologtostderr:
I1102 12:56:20.240991   53199 out.go:360] Setting OutFile to fd 1 ...
I1102 12:56:20.241279   53199 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 12:56:20.241290   53199 out.go:374] Setting ErrFile to fd 2...
I1102 12:56:20.241294   53199 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 12:56:20.241498   53199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
I1102 12:56:20.242011   53199 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 12:56:20.242110   53199 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 12:56:20.242465   53199 cli_runner.go:164] Run: docker container inspect functional-529076 --format={{.State.Status}}
I1102 12:56:20.260088   53199 ssh_runner.go:195] Run: systemctl --version
I1102 12:56:20.260129   53199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529076
I1102 12:56:20.277754   53199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/functional-529076/id_rsa Username:docker}
I1102 12:56:20.375495   53199 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
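
`image ls --format` selects the presentation only; all of the variants in this group are backed by the same `sudo crictl images --output json` call visible in the stderr trace:

    out/minikube-linux-amd64 -p functional-529076 image ls --format short   # one repo:tag per line
    out/minikube-linux-amd64 -p functional-529076 image ls --format table   # box-drawn table
    out/minikube-linux-amd64 -p functional-529076 image ls --format json    # raw image objects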

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-529076 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/nginx                 │ latest             │ 9d0e6f6199dcb │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-529076 image ls --format table --alsologtostderr:
I1102 12:56:21.752639   53600 out.go:360] Setting OutFile to fd 1 ...
I1102 12:56:21.753050   53600 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 12:56:21.753063   53600 out.go:374] Setting ErrFile to fd 2...
I1102 12:56:21.753070   53600 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 12:56:21.753540   53600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
I1102 12:56:21.754593   53600 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 12:56:21.754689   53600 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 12:56:21.755063   53600 cli_runner.go:164] Run: docker container inspect functional-529076 --format={{.State.Status}}
I1102 12:56:21.772617   53600 ssh_runner.go:195] Run: systemctl --version
I1102 12:56:21.772670   53600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529076
I1102 12:56:21.790125   53600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/functional-529076/id_rsa Username:docker}
I1102 12:56:21.888171   53600 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-529076 image ls --format json --alsologtostderr:
[
  {"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
  {"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
  {"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},
  {"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},
  {"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
  {"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
  {"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},
  {"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
  {"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},
  {"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},
  {"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
  {"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},
  {"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
  {"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},
  {"id":"9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec","repoDigests":["docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},
  {"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},
  {"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},
  {"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}
]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-529076 image ls --format json --alsologtostderr:
I1102 12:56:21.530881   53530 out.go:360] Setting OutFile to fd 1 ...
I1102 12:56:21.531004   53530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 12:56:21.531013   53530 out.go:374] Setting ErrFile to fd 2...
I1102 12:56:21.531017   53530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 12:56:21.531220   53530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
I1102 12:56:21.531790   53530 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 12:56:21.531884   53530 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 12:56:21.532273   53530 cli_runner.go:164] Run: docker container inspect functional-529076 --format={{.State.Status}}
I1102 12:56:21.550232   53530 ssh_runner.go:195] Run: systemctl --version
I1102 12:56:21.550277   53530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529076
I1102 12:56:21.567823   53530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/functional-529076/id_rsa Username:docker}
I1102 12:56:21.666196   53530 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
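
Note: the JSON output above is a flat array of {id, repoDigests, repoTags, size} objects, which makes it the easiest format to post-process. A minimal sketch, assuming jq is installed on the host:

# Print the first tag (or "<none>") and the size of every image.
out/minikube-linux-amd64 -p functional-529076 image ls --format json \
  | jq -r '.[] | "\(.repoTags[0] // "<none>")\t\(.size)"'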

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-529076 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec
repoDigests:
- docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-529076 image ls --format yaml --alsologtostderr:
I1102 12:56:20.463032   53253 out.go:360] Setting OutFile to fd 1 ...
I1102 12:56:20.463286   53253 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 12:56:20.463295   53253 out.go:374] Setting ErrFile to fd 2...
I1102 12:56:20.463299   53253 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 12:56:20.463502   53253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
I1102 12:56:20.464052   53253 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 12:56:20.464139   53253 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 12:56:20.464519   53253 cli_runner.go:164] Run: docker container inspect functional-529076 --format={{.State.Status}}
I1102 12:56:20.482158   53253 ssh_runner.go:195] Run: systemctl --version
I1102 12:56:20.482205   53253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529076
I1102 12:56:20.499130   53253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/functional-529076/id_rsa Username:docker}
I1102 12:56:20.597205   53253 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 ssh pgrep buildkitd: exit status 1 (267.346315ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image build -t localhost/my-image:functional-529076 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-529076 image build -t localhost/my-image:functional-529076 testdata/build --alsologtostderr: (2.299680771s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-529076 image build -t localhost/my-image:functional-529076 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b4eebb3c6a3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-529076
--> 02acf8041bb
Successfully tagged localhost/my-image:functional-529076
02acf8041bb7e3c258d888ecde7ac07818af73761ab10245a797d3d08e7c5c63
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-529076 image build -t localhost/my-image:functional-529076 testdata/build --alsologtostderr:
I1102 12:56:20.948900   53415 out.go:360] Setting OutFile to fd 1 ...
I1102 12:56:20.949174   53415 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 12:56:20.949183   53415 out.go:374] Setting ErrFile to fd 2...
I1102 12:56:20.949187   53415 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 12:56:20.949362   53415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
I1102 12:56:20.949904   53415 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 12:56:20.950556   53415 config.go:182] Loaded profile config "functional-529076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 12:56:20.950912   53415 cli_runner.go:164] Run: docker container inspect functional-529076 --format={{.State.Status}}
I1102 12:56:20.968120   53415 ssh_runner.go:195] Run: systemctl --version
I1102 12:56:20.968160   53415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529076
I1102 12:56:20.986482   53415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/functional-529076/id_rsa Username:docker}
I1102 12:56:21.084084   53415 build_images.go:162] Building image from path: /tmp/build.1852992984.tar
I1102 12:56:21.084174   53415 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1102 12:56:21.091824   53415 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1852992984.tar
I1102 12:56:21.095272   53415 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1852992984.tar: stat -c "%s %y" /var/lib/minikube/build/build.1852992984.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1852992984.tar': No such file or directory
I1102 12:56:21.095314   53415 ssh_runner.go:362] scp /tmp/build.1852992984.tar --> /var/lib/minikube/build/build.1852992984.tar (3072 bytes)
I1102 12:56:21.112720   53415 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1852992984
I1102 12:56:21.120157   53415 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1852992984 -xf /var/lib/minikube/build/build.1852992984.tar
I1102 12:56:21.127766   53415 crio.go:315] Building image: /var/lib/minikube/build/build.1852992984
I1102 12:56:21.127832   53415 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-529076 /var/lib/minikube/build/build.1852992984 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1102 12:56:23.174287   53415 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-529076 /var/lib/minikube/build/build.1852992984 --cgroup-manager=cgroupfs: (2.046421033s)
I1102 12:56:23.174346   53415 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1852992984
I1102 12:56:23.182922   53415 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1852992984.tar
I1102 12:56:23.190855   53415 build_images.go:218] Built localhost/my-image:functional-529076 from /tmp/build.1852992984.tar
I1102 12:56:23.190887   53415 build_images.go:134] succeeded building to: functional-529076
I1102 12:56:23.190893   53415 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image ls
E1102 12:57:38.555302   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 12:59:54.688001   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:00:22.397509   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:04:54.687123   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)
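
Note: the STEP 1/3 .. 3/3 lines in the stdout above pin down the Dockerfile this test builds; only the contents of content.txt are not shown in the log. A minimal reconstruction of the build context, ending with the exact command the test ran (the file body is a placeholder assumption):

mkdir -p testdata/build
printf 'placeholder\n' > testdata/build/content.txt   # actual contents not shown in this log
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-529076 image build -t localhost/my-image:functional-529076 testdata/build --alsologtostderr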

TestFunctional/parallel/ImageCommands/Setup (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-529076
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "389.152473ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "77.531627ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-529076 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-529076 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-529076 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-529076 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 46154: os: process already finished
helpers_test.go:519: unable to terminate pid 45873: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "388.266539ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "75.422878ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-529076 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-529076 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [e58365c4-b8a8-446a-8d64-c89c18f9412c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [e58365c4-b8a8-446a-8d64-c89c18f9412c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00277294s
I1102 12:55:54.783020   12914 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)
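
Note: testdata/testsvc.yaml itself is not reproduced in the log, but the waits above fix its shape: a pod named nginx-svc, labelled run=nginx-svc, with a container named nginx, fronted by a LoadBalancer service whose ingress IP the next test reads. A minimal stand-in under those constraints (image and port are assumptions):

kubectl --context functional-529076 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
  - name: nginx
    image: nginx   # image and port are assumptions; not shown in the log
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  selector:
    run: nginx-svc
  ports:
  - port: 80
EOF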

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image rm kicbase/echo-server:functional-529076 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-529076 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.18.29 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-529076 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (6.74s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-529076 /tmp/TestFunctionalparallelMountCmdany-port1929917225/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1762088155111415680" to /tmp/TestFunctionalparallelMountCmdany-port1929917225/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1762088155111415680" to /tmp/TestFunctionalparallelMountCmdany-port1929917225/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1762088155111415680" to /tmp/TestFunctionalparallelMountCmdany-port1929917225/001/test-1762088155111415680
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.014518ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1102 12:55:55.396738   12914 retry.go:31] will retry after 523.844229ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  2 12:55 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  2 12:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  2 12:55 test-1762088155111415680
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh cat /mount-9p/test-1762088155111415680
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-529076 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e58211d0-8c11-48be-966a-29a0b97ecd12] Pending
helpers_test.go:352: "busybox-mount" [e58211d0-8c11-48be-966a-29a0b97ecd12] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e58211d0-8c11-48be-966a-29a0b97ecd12] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e58211d0-8c11-48be-966a-29a0b97ecd12] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003306868s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-529076 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-529076 /tmp/TestFunctionalparallelMountCmdany-port1929917225/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.74s)
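
Note: the mount tests run a host-side 9p server and then verify it from inside the node; the findmnt probe is retried because the mount daemon takes a moment to come up. The same check by hand, using the commands from this run (any writable host directory works in place of the test's temp dir):

# Terminal 1: serve a host directory into the node over 9p (blocks until interrupted).
out/minikube-linux-amd64 mount -p functional-529076 /tmp/some-host-dir:/mount-9p

# Terminal 2: confirm the guest sees a 9p filesystem at the mount point.
out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-529076 ssh -- ls -la /mount-9p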

TestFunctional/parallel/MountCmd/specific-port (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-529076 /tmp/TestFunctionalparallelMountCmdspecific-port3106994727/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.92747ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1102 12:56:02.135602   12914 retry.go:31] will retry after 320.992595ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-529076 /tmp/TestFunctionalparallelMountCmdspecific-port3106994727/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 ssh "sudo umount -f /mount-9p": exit status 1 (267.70179ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-529076 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-529076 /tmp/TestFunctionalparallelMountCmdspecific-port3106994727/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-529076 /tmp/TestFunctionalparallelMountCmdVerifyCleanup843855052/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-529076 /tmp/TestFunctionalparallelMountCmdVerifyCleanup843855052/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-529076 /tmp/TestFunctionalparallelMountCmdVerifyCleanup843855052/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T" /mount1: exit status 1 (345.632346ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1102 12:56:03.820986   12914 retry.go:31] will retry after 616.656238ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-529076 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-529076 /tmp/TestFunctionalparallelMountCmdVerifyCleanup843855052/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-529076 /tmp/TestFunctionalparallelMountCmdVerifyCleanup843855052/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-529076 /tmp/TestFunctionalparallelMountCmdVerifyCleanup843855052/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)
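
Note: the cleanup verified here hinges on the --kill flag, which tears down every lingering mount daemon for the profile in one call; the "unable to find parent, assuming dead" lines that follow are the expected confirmation that the three mount processes are already gone. The call, verbatim from the run above:

out/minikube-linux-amd64 mount -p functional-529076 --kill=true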

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ServiceCmd/List (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-529076 service list: (1.700109188s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-529076 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-529076 service list -o json: (1.698857327s)
functional_test.go:1504: Took "1.698950098s" to run "out/minikube-linux-amd64 -p functional-529076 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-529076
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-529076
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-529076
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (104.32s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-881096 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m43.593492488s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (104.32s)

TestMultiControlPlane/serial/DeployApp (4.71s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-881096 kubectl -- rollout status deployment/busybox: (2.810472731s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-52p22 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-kpf5m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-nc8vh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-52p22 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-kpf5m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-nc8vh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-52p22 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-kpf5m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-nc8vh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.71s)

TestMultiControlPlane/serial/PingHostFromPods (1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-52p22 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-52p22 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-kpf5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-kpf5m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-nc8vh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 kubectl -- exec busybox-7b57f96db7-nc8vh -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)
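
Note: the pipeline in this test extracts the host gateway address from BusyBox nslookup output: awk 'NR==5' keeps the fifth line of the reply (where that nslookup prints the resolved address) and cut -d' ' -f3 takes the third space-separated field, which is then pinged. The same probe run by hand against one of the pods from this run:

kubectl --context ha-881096 exec busybox-7b57f96db7-52p22 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
kubectl --context ha-881096 exec busybox-7b57f96db7-52p22 -- sh -c "ping -c 1 192.168.49.1"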

TestMultiControlPlane/serial/AddWorkerNode (24.85s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-881096 node add --alsologtostderr -v 5: (23.971211886s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.85s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-881096 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

TestMultiControlPlane/serial/CopyFile (17.1s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp testdata/cp-test.txt ha-881096:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1337554470/001/cp-test_ha-881096.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096:/home/docker/cp-test.txt ha-881096-m02:/home/docker/cp-test_ha-881096_ha-881096-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m02 "sudo cat /home/docker/cp-test_ha-881096_ha-881096-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096:/home/docker/cp-test.txt ha-881096-m03:/home/docker/cp-test_ha-881096_ha-881096-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m03 "sudo cat /home/docker/cp-test_ha-881096_ha-881096-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096:/home/docker/cp-test.txt ha-881096-m04:/home/docker/cp-test_ha-881096_ha-881096-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m04 "sudo cat /home/docker/cp-test_ha-881096_ha-881096-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp testdata/cp-test.txt ha-881096-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1337554470/001/cp-test_ha-881096-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m02:/home/docker/cp-test.txt ha-881096:/home/docker/cp-test_ha-881096-m02_ha-881096.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096 "sudo cat /home/docker/cp-test_ha-881096-m02_ha-881096.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m02:/home/docker/cp-test.txt ha-881096-m03:/home/docker/cp-test_ha-881096-m02_ha-881096-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m03 "sudo cat /home/docker/cp-test_ha-881096-m02_ha-881096-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m02:/home/docker/cp-test.txt ha-881096-m04:/home/docker/cp-test_ha-881096-m02_ha-881096-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m04 "sudo cat /home/docker/cp-test_ha-881096-m02_ha-881096-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp testdata/cp-test.txt ha-881096-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1337554470/001/cp-test_ha-881096-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m03:/home/docker/cp-test.txt ha-881096:/home/docker/cp-test_ha-881096-m03_ha-881096.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096 "sudo cat /home/docker/cp-test_ha-881096-m03_ha-881096.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m03:/home/docker/cp-test.txt ha-881096-m02:/home/docker/cp-test_ha-881096-m03_ha-881096-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m02 "sudo cat /home/docker/cp-test_ha-881096-m03_ha-881096-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m03:/home/docker/cp-test.txt ha-881096-m04:/home/docker/cp-test_ha-881096-m03_ha-881096-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m04 "sudo cat /home/docker/cp-test_ha-881096-m03_ha-881096-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp testdata/cp-test.txt ha-881096-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1337554470/001/cp-test_ha-881096-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m04:/home/docker/cp-test.txt ha-881096:/home/docker/cp-test_ha-881096-m04_ha-881096.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096 "sudo cat /home/docker/cp-test_ha-881096-m04_ha-881096.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m04:/home/docker/cp-test.txt ha-881096-m02:/home/docker/cp-test_ha-881096-m04_ha-881096-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m02 "sudo cat /home/docker/cp-test_ha-881096-m04_ha-881096-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 cp ha-881096-m04:/home/docker/cp-test.txt ha-881096-m03:/home/docker/cp-test_ha-881096-m04_ha-881096-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 ssh -n ha-881096-m03 "sudo cat /home/docker/cp-test_ha-881096-m04_ha-881096-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.10s)
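
For reference, the copy matrix above is mechanical: seed cp-test.txt onto each node, copy it back to the host, then fan it out to every other node, verifying each hop with `ssh -n <node> "sudo cat ..."`. A minimal Go sketch of that loop (a sketch only: `run` is a hypothetical stand-in for the suite's Run helper, and the binary path is assumed from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to the minikube binary under test and prints the result.
	func run(args ...string) {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Printf("%v err=%v\n%s", args, err, out)
	}

	func main() {
		nodes := []string{"ha-881096", "ha-881096-m02", "ha-881096-m03", "ha-881096-m04"}
		for _, src := range nodes {
			// seed the source node with the test file
			run("-p", "ha-881096", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
			for _, dst := range nodes {
				if dst == src {
					continue
				}
				// copy node-to-node, then cat the file on the destination to verify
				name := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
				run("-p", "ha-881096", "cp", src+":/home/docker/cp-test.txt", dst+":"+name)
				run("-p", "ha-881096", "ssh", "-n", dst, "sudo cat "+name)
			}
		}
	}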

TestMultiControlPlane/serial/StopSecondaryNode (13.26s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-881096 node stop m02 --alsologtostderr -v 5: (12.568654928s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-881096 status --alsologtostderr -v 5: exit status 7 (691.295551ms)

-- stdout --
	ha-881096
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-881096-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881096-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-881096-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1102 13:08:46.204008   77923 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:08:46.204246   77923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:08:46.204254   77923 out.go:374] Setting ErrFile to fd 2...
	I1102 13:08:46.204258   77923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:08:46.204473   77923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:08:46.204654   77923 out.go:368] Setting JSON to false
	I1102 13:08:46.204677   77923 mustload.go:66] Loading cluster: ha-881096
	I1102 13:08:46.204782   77923 notify.go:221] Checking for updates...
	I1102 13:08:46.205004   77923 config.go:182] Loaded profile config "ha-881096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:08:46.205022   77923 status.go:174] checking status of ha-881096 ...
	I1102 13:08:46.205472   77923 cli_runner.go:164] Run: docker container inspect ha-881096 --format={{.State.Status}}
	I1102 13:08:46.224622   77923 status.go:371] ha-881096 host status = "Running" (err=<nil>)
	I1102 13:08:46.224645   77923 host.go:66] Checking if "ha-881096" exists ...
	I1102 13:08:46.224881   77923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-881096
	I1102 13:08:46.244225   77923 host.go:66] Checking if "ha-881096" exists ...
	I1102 13:08:46.244645   77923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:08:46.244696   77923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-881096
	I1102 13:08:46.263227   77923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/ha-881096/id_rsa Username:docker}
	I1102 13:08:46.361091   77923 ssh_runner.go:195] Run: systemctl --version
	I1102 13:08:46.367393   77923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:08:46.379250   77923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:08:46.433948   77923 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-02 13:08:46.424601506 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:08:46.434752   77923 kubeconfig.go:125] found "ha-881096" server: "https://192.168.49.254:8443"
	I1102 13:08:46.434788   77923 api_server.go:166] Checking apiserver status ...
	I1102 13:08:46.434826   77923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:08:46.446593   77923 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1281/cgroup
	W1102 13:08:46.455013   77923 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1281/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:08:46.455095   77923 ssh_runner.go:195] Run: ls
	I1102 13:08:46.458802   77923 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1102 13:08:46.464088   77923 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1102 13:08:46.464109   77923 status.go:463] ha-881096 apiserver status = Running (err=<nil>)
	I1102 13:08:46.464119   77923 status.go:176] ha-881096 status: &{Name:ha-881096 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:08:46.464133   77923 status.go:174] checking status of ha-881096-m02 ...
	I1102 13:08:46.464352   77923 cli_runner.go:164] Run: docker container inspect ha-881096-m02 --format={{.State.Status}}
	I1102 13:08:46.482426   77923 status.go:371] ha-881096-m02 host status = "Stopped" (err=<nil>)
	I1102 13:08:46.482446   77923 status.go:384] host is not running, skipping remaining checks
	I1102 13:08:46.482452   77923 status.go:176] ha-881096-m02 status: &{Name:ha-881096-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:08:46.482482   77923 status.go:174] checking status of ha-881096-m03 ...
	I1102 13:08:46.482762   77923 cli_runner.go:164] Run: docker container inspect ha-881096-m03 --format={{.State.Status}}
	I1102 13:08:46.499250   77923 status.go:371] ha-881096-m03 host status = "Running" (err=<nil>)
	I1102 13:08:46.499274   77923 host.go:66] Checking if "ha-881096-m03" exists ...
	I1102 13:08:46.499498   77923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-881096-m03
	I1102 13:08:46.516978   77923 host.go:66] Checking if "ha-881096-m03" exists ...
	I1102 13:08:46.517264   77923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:08:46.517309   77923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-881096-m03
	I1102 13:08:46.534104   77923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/ha-881096-m03/id_rsa Username:docker}
	I1102 13:08:46.630716   77923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:08:46.643753   77923 kubeconfig.go:125] found "ha-881096" server: "https://192.168.49.254:8443"
	I1102 13:08:46.643780   77923 api_server.go:166] Checking apiserver status ...
	I1102 13:08:46.643820   77923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:08:46.654155   77923 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1188/cgroup
	W1102 13:08:46.662206   77923 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1188/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:08:46.662249   77923 ssh_runner.go:195] Run: ls
	I1102 13:08:46.665812   77923 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1102 13:08:46.670116   77923 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1102 13:08:46.670137   77923 status.go:463] ha-881096-m03 apiserver status = Running (err=<nil>)
	I1102 13:08:46.670146   77923 status.go:176] ha-881096-m03 status: &{Name:ha-881096-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:08:46.670172   77923 status.go:174] checking status of ha-881096-m04 ...
	I1102 13:08:46.670405   77923 cli_runner.go:164] Run: docker container inspect ha-881096-m04 --format={{.State.Status}}
	I1102 13:08:46.688137   77923 status.go:371] ha-881096-m04 host status = "Running" (err=<nil>)
	I1102 13:08:46.688162   77923 host.go:66] Checking if "ha-881096-m04" exists ...
	I1102 13:08:46.688419   77923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-881096-m04
	I1102 13:08:46.705681   77923 host.go:66] Checking if "ha-881096-m04" exists ...
	I1102 13:08:46.705969   77923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:08:46.706019   77923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-881096-m04
	I1102 13:08:46.722962   77923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/ha-881096-m04/id_rsa Username:docker}
	I1102 13:08:46.820787   77923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:08:46.832864   77923 status.go:176] ha-881096-m04 status: &{Name:ha-881096-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.26s)
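
The `exit status 7` above is the expected result rather than a failure: per `minikube status --help`, the exit code encodes component health bitwise (1 = host not running, 2 = kubelet not running, 4 = apiserver not running), so the fully stopped m02 contributes 1+2+4 = 7. A small decoding sketch, assuming the same binary path and profile as the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// A stopped node makes `minikube status` exit nonzero, with the
		// failing components encoded in the exit code's low bits.
		err := exec.Command("out/minikube-linux-amd64", "-p", "ha-881096", "status").Run()
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode()
		}
		fmt.Println("host stopped:      ", code&1 != 0)
		fmt.Println("kubelet stopped:   ", code&2 != 0)
		fmt.Println("apiserver stopped: ", code&4 != 0)
	}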

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.06s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-881096 node start m02 --alsologtostderr -v 5: (8.119115353s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.06s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (198.15s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-881096 stop --alsologtostderr -v 5: (50.847050867s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 start --wait true --alsologtostderr -v 5
E1102 13:09:54.687763   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:45.148973   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:45.155375   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:45.166727   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:45.188107   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:45.229519   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:45.310967   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:45.472463   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:45.794165   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:46.436227   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:47.717787   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:50.280680   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:10:55.401988   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:11:05.643871   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:11:17.760558   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:11:26.125393   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:12:07.087709   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-881096 start --wait true --alsologtostderr -v 5: (2m27.181328835s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (198.15s)
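
The repeated `cert_rotation.go:172` errors during the restart appear to be harmless background noise: a client-go watcher is still looking for client certificates belonging to the addons-341255 and functional-529076 profiles, whose files were removed when those earlier suites cleaned up. They do not affect this test's outcome.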

TestMultiControlPlane/serial/DeleteSecondaryNode (10.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-881096 node delete m03 --alsologtostderr -v 5: (9.306937927s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.10s)
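
The go-template in the final check walks every node's `status.conditions` and prints only the Ready condition's status, one line per node. With m03 deleted and the remaining three nodes healthy, the expected output is simply (the leading spaces and wrapping quotes are literals in the template itself):

	' True
	 True
	 True
	'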

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (47.28s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-881096 stop --alsologtostderr -v 5: (47.163544142s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-881096 status --alsologtostderr -v 5: exit status 7 (112.444684ms)

-- stdout --
	ha-881096
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881096-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881096-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1102 13:13:13.659854   92546 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:13:13.660108   92546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:13:13.660116   92546 out.go:374] Setting ErrFile to fd 2...
	I1102 13:13:13.660120   92546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:13:13.660305   92546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:13:13.660485   92546 out.go:368] Setting JSON to false
	I1102 13:13:13.660506   92546 mustload.go:66] Loading cluster: ha-881096
	I1102 13:13:13.660535   92546 notify.go:221] Checking for updates...
	I1102 13:13:13.660879   92546 config.go:182] Loaded profile config "ha-881096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:13:13.660899   92546 status.go:174] checking status of ha-881096 ...
	I1102 13:13:13.661306   92546 cli_runner.go:164] Run: docker container inspect ha-881096 --format={{.State.Status}}
	I1102 13:13:13.680668   92546 status.go:371] ha-881096 host status = "Stopped" (err=<nil>)
	I1102 13:13:13.680690   92546 status.go:384] host is not running, skipping remaining checks
	I1102 13:13:13.680698   92546 status.go:176] ha-881096 status: &{Name:ha-881096 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:13:13.680724   92546 status.go:174] checking status of ha-881096-m02 ...
	I1102 13:13:13.680965   92546 cli_runner.go:164] Run: docker container inspect ha-881096-m02 --format={{.State.Status}}
	I1102 13:13:13.698403   92546 status.go:371] ha-881096-m02 host status = "Stopped" (err=<nil>)
	I1102 13:13:13.698429   92546 status.go:384] host is not running, skipping remaining checks
	I1102 13:13:13.698437   92546 status.go:176] ha-881096-m02 status: &{Name:ha-881096-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:13:13.698472   92546 status.go:174] checking status of ha-881096-m04 ...
	I1102 13:13:13.698746   92546 cli_runner.go:164] Run: docker container inspect ha-881096-m04 --format={{.State.Status}}
	I1102 13:13:13.716037   92546 status.go:371] ha-881096-m04 host status = "Stopped" (err=<nil>)
	I1102 13:13:13.716090   92546 status.go:384] host is not running, skipping remaining checks
	I1102 13:13:13.716104   92546 status.go:176] ha-881096-m04 status: &{Name:ha-881096-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (47.28s)

TestMultiControlPlane/serial/RestartCluster (57.34s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1102 13:13:29.009998   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-881096 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (56.527635469s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.34s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (65.38s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 node add --control-plane --alsologtostderr -v 5
E1102 13:14:54.687665   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-881096 node add --control-plane --alsologtostderr -v 5: (1m4.49537434s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-881096 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (65.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

TestJSONOutput/start/Command (39.27s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-441451 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1102 13:15:45.149131   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-441451 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (39.266821908s)
--- PASS: TestJSONOutput/start/Command (39.27s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.99s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-441451 --output=json --user=testUser
E1102 13:16:12.854209   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-441451 --output=json --user=testUser: (7.989400736s)
--- PASS: TestJSONOutput/stop/Command (7.99s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-498134 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-498134 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.074631ms)

-- stdout --
	{"specversion":"1.0","id":"82772e1b-2e52-4837-a538-f634b397383b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-498134] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e35b4d1-a37e-47f5-9b26-dea3e9411116","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21808"}}
	{"specversion":"1.0","id":"f52ec3f9-85e9-419b-9c43-9564191afcce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"821401d5-02b9-4247-8dc5-6330119756e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig"}}
	{"specversion":"1.0","id":"bf080839-4a6c-491e-8bde-e5104253d111","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube"}}
	{"specversion":"1.0","id":"692cc9ad-3176-49d1-95fc-47183377b5e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"303d8da9-a7bb-46e4-a6ef-21aa52db667f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a831c198-e0e7-4c5c-b79e-4ca03993ad11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-498134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-498134
--- PASS: TestErrorJSONOutput (0.22s)
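
The `-- stdout --` block above is exactly what the JSONOutput tests assert against: one CloudEvents-style object per line, with the event class in `type` (io.k8s.sigs.minikube.step, .info, .error) and the payload in `data`. A minimal consumer sketch (assumed to read minikube's `--output=json` stream on stdin; field names are taken from the lines above):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the log lines above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore non-JSON noise
			}
			// surface errors the way TestErrorJSONOutput checks for them
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

Fed the stdout block above, this would print the single error event: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64.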

TestKicCustomNetwork/create_custom_network (31.07s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-460520 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-460520 --network=: (28.895933342s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-460520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-460520
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-460520: (2.159168522s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.07s)

TestKicCustomNetwork/use_default_bridge_network (27.04s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-973818 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-973818 --network=bridge: (25.041881663s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-973818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-973818
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-973818: (1.982333689s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.04s)

TestKicExistingNetwork (28.51s)

=== RUN   TestKicExistingNetwork
I1102 13:17:19.889182   12914 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1102 13:17:19.905771   12914 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1102 13:17:19.905860   12914 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1102 13:17:19.905881   12914 cli_runner.go:164] Run: docker network inspect existing-network
W1102 13:17:19.923645   12914 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1102 13:17:19.923686   12914 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1102 13:17:19.923702   12914 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1102 13:17:19.923803   12914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1102 13:17:19.941887   12914 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9493238624b4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ea:ff:51:3e:e4:f4} reservation:<nil>}
I1102 13:17:19.942197   12914 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000128c0}
I1102 13:17:19.942233   12914 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1102 13:17:19.942283   12914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1102 13:17:19.998054   12914 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-076591 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-076591 --network=existing-network: (26.406261937s)
helpers_test.go:175: Cleaning up "existing-network-076591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-076591
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-076591: (1.96120536s)
I1102 13:17:48.383071   12914 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (28.51s)
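
The subnet probe in this log is the part worth noting: minikube inspects the existing bridges, finds 192.168.49.0/24 already taken by the cluster network, and settles on 192.168.58.0/24. The jump from .49 to .58 suggests the scan steps the third octet by 9 per attempt; a toy sketch of that logic under exactly that assumption (in the real flow the taken set would come from `docker network inspect`, not a hard-coded map):

	package main

	import "fmt"

	// nextFreeSubnet walks candidate private /24s the way the probe above
	// appears to: start at 192.168.49.0/24 and step the third octet by 9.
	func nextFreeSubnet(taken map[string]bool) string {
		for third := 49; third < 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{"192.168.49.0/24": true} // from the log above
		fmt.Println(nextFreeSubnet(taken))                // prints 192.168.58.0/24
	}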

TestKicCustomSubnet (24.1s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-052938 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-052938 --subnet=192.168.60.0/24: (21.93326015s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-052938 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-052938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-052938
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-052938: (2.150762097s)
--- PASS: TestKicCustomSubnet (24.10s)
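
The `docker network inspect --format "{{(index .IPAM.Config 0).Subnet}}"` call is the whole assertion here: it pulls the first IPAM entry out of the created network and prints its subnet, which for this run should be exactly the requested 192.168.60.0/24.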

TestKicStaticIP (26.29s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-805573 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-805573 --static-ip=192.168.200.200: (23.99278068s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-805573 ip
helpers_test.go:175: Cleaning up "static-ip-805573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-805573
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-805573: (2.150360291s)
--- PASS: TestKicStaticIP (26.29s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (48.38s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-687353 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-687353 --driver=docker  --container-runtime=crio: (21.544059378s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-690571 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-690571 --driver=docker  --container-runtime=crio: (20.842331232s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-687353
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-690571
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-690571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-690571
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-690571: (2.399997821s)
helpers_test.go:175: Cleaning up "first-687353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-687353
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-687353: (2.354888943s)
--- PASS: TestMinikubeProfile (48.38s)

TestMountStart/serial/StartWithMountFirst (5.63s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-715955 --memory=3072 --mount-string /tmp/TestMountStartserial1936699078/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-715955 --memory=3072 --mount-string /tmp/TestMountStartserial1936699078/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.625287187s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.63s)
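
The flags above pin down each knob of the mount: --mount-string maps the host temp dir onto /minikube-host in the guest, --mount-port fixes the mount server's port (46464 here, 46465 for the second profile below so the two can coexist), and --mount-msize/--mount-uid/--mount-gid set the 9p message size and ownership. The VerifyMount steps that follow only list the directory; a stricter spot-check (hypothetical, not part of the test) would inspect the mount itself:

	out/minikube-linux-amd64 -p mount-start-1-715955 ssh -- mount | grep /minikube-host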

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-715955 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
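
The two tests above cover the host-mount options at start time; a condensed replay with an illustrative profile and host path (the flags mirror the invocation logged above):

	minikube start -p mnt-demo --memory=3072 --no-kubernetes \
	  --mount-string /tmp/src:/minikube-host \
	  --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
	  --driver=docker --container-runtime=crio
	minikube -p mnt-demo ssh -- ls /minikube-host   # verify the mount is visible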

TestMountStart/serial/StartWithMountSecond (5.85s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-729952 --memory=3072 --mount-string /tmp/TestMountStartserial1936699078/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-729952 --memory=3072 --mount-string /tmp/TestMountStartserial1936699078/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.853979286s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.85s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-729952 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-715955 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-715955 --alsologtostderr -v=5: (1.711215851s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-729952 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-729952
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-729952: (1.252501062s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.37s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-729952
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-729952: (6.373071999s)
--- PASS: TestMountStart/serial/RestartStopped (7.37s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-729952 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (60.49s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-150085 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1102 13:19:54.687737   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:20:45.149434   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-150085 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m0.007518931s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (60.49s)

TestMultiNode/serial/DeployApp2Nodes (3.62s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-150085 -- rollout status deployment/busybox: (2.318368201s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- exec busybox-7b57f96db7-5gg47 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- exec busybox-7b57f96db7-pkxtw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- exec busybox-7b57f96db7-5gg47 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- exec busybox-7b57f96db7-pkxtw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- exec busybox-7b57f96db7-5gg47 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- exec busybox-7b57f96db7-pkxtw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.62s)
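
The deployment/DNS sequence above can be replayed against any multinode profile; pod names will differ per rollout (the busybox names above are from this run):

	minikube kubectl -p <profile> -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	minikube kubectl -p <profile> -- rollout status deployment/busybox
	minikube kubectl -p <profile> -- get pods -o jsonpath='{.items[*].metadata.name}'
	minikube kubectl -p <profile> -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local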

TestMultiNode/serial/PingHostFrom2Pods (0.69s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- exec busybox-7b57f96db7-5gg47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- exec busybox-7b57f96db7-5gg47 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- exec busybox-7b57f96db7-pkxtw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-150085 -- exec busybox-7b57f96db7-pkxtw -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)
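
The host-ping check above extracts the host.minikube.internal address from nslookup output (line 5, third field) and pings it once from each pod; the same probe by hand:

	minikube kubectl -p <profile> -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	minikube kubectl -p <profile> -- exec <pod> -- sh -c "ping -c 1 <host-ip>"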

TestMultiNode/serial/AddNode (54.67s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-150085 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-150085 -v=5 --alsologtostderr: (54.036795631s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.67s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-150085 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

TestMultiNode/serial/CopyFile (9.68s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp testdata/cp-test.txt multinode-150085:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp multinode-150085:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile483812841/001/cp-test_multinode-150085.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp multinode-150085:/home/docker/cp-test.txt multinode-150085-m02:/home/docker/cp-test_multinode-150085_multinode-150085-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m02 "sudo cat /home/docker/cp-test_multinode-150085_multinode-150085-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp multinode-150085:/home/docker/cp-test.txt multinode-150085-m03:/home/docker/cp-test_multinode-150085_multinode-150085-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m03 "sudo cat /home/docker/cp-test_multinode-150085_multinode-150085-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp testdata/cp-test.txt multinode-150085-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp multinode-150085-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile483812841/001/cp-test_multinode-150085-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp multinode-150085-m02:/home/docker/cp-test.txt multinode-150085:/home/docker/cp-test_multinode-150085-m02_multinode-150085.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085 "sudo cat /home/docker/cp-test_multinode-150085-m02_multinode-150085.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp multinode-150085-m02:/home/docker/cp-test.txt multinode-150085-m03:/home/docker/cp-test_multinode-150085-m02_multinode-150085-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m03 "sudo cat /home/docker/cp-test_multinode-150085-m02_multinode-150085-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp testdata/cp-test.txt multinode-150085-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp multinode-150085-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile483812841/001/cp-test_multinode-150085-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp multinode-150085-m03:/home/docker/cp-test.txt multinode-150085:/home/docker/cp-test_multinode-150085-m03_multinode-150085.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085 "sudo cat /home/docker/cp-test_multinode-150085-m03_multinode-150085.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 cp multinode-150085-m03:/home/docker/cp-test.txt multinode-150085-m02:/home/docker/cp-test_multinode-150085-m03_multinode-150085-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 ssh -n multinode-150085-m02 "sudo cat /home/docker/cp-test_multinode-150085-m03_multinode-150085-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.68s)
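
The copy matrix above exercises all three minikube cp directions plus verification over ssh; the general forms, with placeholders:

	minikube -p <profile> cp <local-file> <node>:<remote-path>      # host -> node
	minikube -p <profile> cp <node>:<remote-path> <local-path>      # node -> host
	minikube -p <profile> cp <node-a>:<path> <node-b>:<path>        # node -> node
	minikube -p <profile> ssh -n <node> "sudo cat <remote-path>"    # verify contents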

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-150085 node stop m03: (1.261968684s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-150085 status: exit status 7 (491.49352ms)

-- stdout --
	multinode-150085
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-150085-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-150085-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-150085 status --alsologtostderr: exit status 7 (492.692808ms)

-- stdout --
	multinode-150085
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-150085-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-150085-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1102 13:22:03.662935  153081 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:22:03.663190  153081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:22:03.663199  153081 out.go:374] Setting ErrFile to fd 2...
	I1102 13:22:03.663203  153081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:22:03.663400  153081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:22:03.663577  153081 out.go:368] Setting JSON to false
	I1102 13:22:03.663598  153081 mustload.go:66] Loading cluster: multinode-150085
	I1102 13:22:03.663638  153081 notify.go:221] Checking for updates...
	I1102 13:22:03.663948  153081 config.go:182] Loaded profile config "multinode-150085": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:22:03.663965  153081 status.go:174] checking status of multinode-150085 ...
	I1102 13:22:03.664394  153081 cli_runner.go:164] Run: docker container inspect multinode-150085 --format={{.State.Status}}
	I1102 13:22:03.684026  153081 status.go:371] multinode-150085 host status = "Running" (err=<nil>)
	I1102 13:22:03.684053  153081 host.go:66] Checking if "multinode-150085" exists ...
	I1102 13:22:03.684332  153081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-150085
	I1102 13:22:03.701635  153081 host.go:66] Checking if "multinode-150085" exists ...
	I1102 13:22:03.701960  153081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:22:03.702000  153081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-150085
	I1102 13:22:03.719733  153081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32905 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/multinode-150085/id_rsa Username:docker}
	I1102 13:22:03.816797  153081 ssh_runner.go:195] Run: systemctl --version
	I1102 13:22:03.822744  153081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:22:03.834270  153081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:22:03.890520  153081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-02 13:22:03.879359394 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:22:03.891089  153081 kubeconfig.go:125] found "multinode-150085" server: "https://192.168.67.2:8443"
	I1102 13:22:03.891114  153081 api_server.go:166] Checking apiserver status ...
	I1102 13:22:03.891157  153081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:22:03.902207  153081 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	W1102 13:22:03.910251  153081 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:22:03.910297  153081 ssh_runner.go:195] Run: ls
	I1102 13:22:03.913885  153081 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1102 13:22:03.918981  153081 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1102 13:22:03.919004  153081 status.go:463] multinode-150085 apiserver status = Running (err=<nil>)
	I1102 13:22:03.919016  153081 status.go:176] multinode-150085 status: &{Name:multinode-150085 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:22:03.919038  153081 status.go:174] checking status of multinode-150085-m02 ...
	I1102 13:22:03.919282  153081 cli_runner.go:164] Run: docker container inspect multinode-150085-m02 --format={{.State.Status}}
	I1102 13:22:03.936974  153081 status.go:371] multinode-150085-m02 host status = "Running" (err=<nil>)
	I1102 13:22:03.937005  153081 host.go:66] Checking if "multinode-150085-m02" exists ...
	I1102 13:22:03.937251  153081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-150085-m02
	I1102 13:22:03.953959  153081 host.go:66] Checking if "multinode-150085-m02" exists ...
	I1102 13:22:03.954202  153081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:22:03.954244  153081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-150085-m02
	I1102 13:22:03.971813  153081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/21808-9416/.minikube/machines/multinode-150085-m02/id_rsa Username:docker}
	I1102 13:22:04.067851  153081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:22:04.080067  153081 status.go:176] multinode-150085-m02 status: &{Name:multinode-150085-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:22:04.080110  153081 status.go:174] checking status of multinode-150085-m03 ...
	I1102 13:22:04.080410  153081 cli_runner.go:164] Run: docker container inspect multinode-150085-m03 --format={{.State.Status}}
	I1102 13:22:04.098257  153081 status.go:371] multinode-150085-m03 host status = "Stopped" (err=<nil>)
	I1102 13:22:04.098281  153081 status.go:384] host is not running, skipping remaining checks
	I1102 13:22:04.098289  153081 status.go:176] multinode-150085-m03 status: &{Name:multinode-150085-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
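
As the run above shows, stopping one node leaves the cluster reachable but makes minikube status exit non-zero (exit status 7) while reporting the stopped node; a manual check, with placeholders:

	minikube -p <profile> node stop m03
	minikube -p <profile> status; echo "exit: $?"   # expect exit 7 with m03 Stopped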

TestMultiNode/serial/StartAfterStop (7.36s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-150085 node start m03 -v=5 --alsologtostderr: (6.665114334s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.36s)

TestMultiNode/serial/RestartKeepsNodes (68.5s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-150085
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-150085
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-150085: (29.434213558s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-150085 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-150085 --wait=true -v=5 --alsologtostderr: (38.947338902s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-150085
--- PASS: TestMultiNode/serial/RestartKeepsNodes (68.50s)
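
The restart test stops the whole profile and starts it again with --wait=true, then confirms the node list is unchanged; condensed:

	minikube node list -p <profile>
	minikube stop -p <profile>
	minikube start -p <profile> --wait=true
	minikube node list -p <profile>   # same nodes as before the stop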

TestMultiNode/serial/DeleteNode (5.22s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-150085 node delete m03: (4.642435327s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)

TestMultiNode/serial/StopMultiNode (30.34s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-150085 stop: (30.154232884s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-150085 status: exit status 7 (95.95066ms)

-- stdout --
	multinode-150085
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-150085-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-150085 status --alsologtostderr: exit status 7 (94.405047ms)

-- stdout --
	multinode-150085
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-150085-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1102 13:23:55.484738  162810 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:23:55.484958  162810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:23:55.484966  162810 out.go:374] Setting ErrFile to fd 2...
	I1102 13:23:55.484969  162810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:23:55.485165  162810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:23:55.485325  162810 out.go:368] Setting JSON to false
	I1102 13:23:55.485348  162810 mustload.go:66] Loading cluster: multinode-150085
	I1102 13:23:55.485494  162810 notify.go:221] Checking for updates...
	I1102 13:23:55.485760  162810 config.go:182] Loaded profile config "multinode-150085": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:23:55.485781  162810 status.go:174] checking status of multinode-150085 ...
	I1102 13:23:55.486407  162810 cli_runner.go:164] Run: docker container inspect multinode-150085 --format={{.State.Status}}
	I1102 13:23:55.504927  162810 status.go:371] multinode-150085 host status = "Stopped" (err=<nil>)
	I1102 13:23:55.504952  162810 status.go:384] host is not running, skipping remaining checks
	I1102 13:23:55.504958  162810 status.go:176] multinode-150085 status: &{Name:multinode-150085 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:23:55.504983  162810 status.go:174] checking status of multinode-150085-m02 ...
	I1102 13:23:55.505216  162810 cli_runner.go:164] Run: docker container inspect multinode-150085-m02 --format={{.State.Status}}
	I1102 13:23:55.522386  162810 status.go:371] multinode-150085-m02 host status = "Stopped" (err=<nil>)
	I1102 13:23:55.522407  162810 status.go:384] host is not running, skipping remaining checks
	I1102 13:23:55.522413  162810 status.go:176] multinode-150085-m02 status: &{Name:multinode-150085-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.34s)

TestMultiNode/serial/RestartMultiNode (48.29s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-150085 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-150085 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.69496552s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-150085 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.29s)

TestMultiNode/serial/ValidateNameConflict (23.19s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-150085
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-150085-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-150085-m02 --driver=docker  --container-runtime=crio: exit status 14 (70.325073ms)

-- stdout --
	* [multinode-150085-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-150085-m02' is duplicated with machine name 'multinode-150085-m02' in profile 'multinode-150085'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-150085-m03 --driver=docker  --container-runtime=crio
E1102 13:24:54.688806   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-150085-m03 --driver=docker  --container-runtime=crio: (20.3991128s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-150085
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-150085: exit status 80 (292.34624ms)

-- stdout --
	* Adding node m03 to cluster multinode-150085 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-150085-m03 already exists in multinode-150085-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-150085-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-150085-m03: (2.375977868s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.19s)
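
Two guard rails are exercised above: a new profile may not reuse an existing machine name (exit 106, MK_USAGE), and node add refuses to add a node whose generated name already exists as a standalone profile (exit 80, GUEST_NODE_ADD). With illustrative names:

	minikube start -p <profile>-m02 --driver=docker --container-runtime=crio   # rejected: duplicates machine <profile>-m02
	minikube node add -p <profile>                                             # rejected while <profile>-m03 exists as its own profile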

TestPreload (103.14s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-238119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1102 13:25:45.148956   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-238119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (46.973970097s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-238119 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-238119 image pull gcr.io/k8s-minikube/busybox: (1.421761668s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-238119
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-238119: (5.828818842s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-238119 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-238119 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.277833799s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-238119 image list
helpers_test.go:175: Cleaning up "test-preload-238119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-238119
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-238119: (2.409347171s)
--- PASS: TestPreload (103.14s)
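
The preload test round-trips a non-preloaded cluster: start with --preload=false on an older Kubernetes, pull an extra image, stop, restart with preloads enabled, and confirm the pulled image survived; condensed with an illustrative profile:

	minikube start -p preload-demo --memory=3072 --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=crio
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --memory=3072 --driver=docker --container-runtime=crio
	minikube -p preload-demo image list   # busybox should still be listed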

TestScheduledStopUnix (96.32s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-744353 --memory=3072 --driver=docker  --container-runtime=crio
E1102 13:27:08.217941   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-744353 --memory=3072 --driver=docker  --container-runtime=crio: (20.055132822s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-744353 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-744353 -n scheduled-stop-744353
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-744353 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1102 13:27:14.803524   12914 retry.go:31] will retry after 120.122µs: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.804688   12914 retry.go:31] will retry after 91.286µs: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.805815   12914 retry.go:31] will retry after 272.214µs: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.806920   12914 retry.go:31] will retry after 275.827µs: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.808053   12914 retry.go:31] will retry after 615.414µs: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.809177   12914 retry.go:31] will retry after 1.050057ms: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.810326   12914 retry.go:31] will retry after 1.049462ms: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.811458   12914 retry.go:31] will retry after 992.312µs: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.812610   12914 retry.go:31] will retry after 2.215328ms: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.815815   12914 retry.go:31] will retry after 4.682504ms: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.821059   12914 retry.go:31] will retry after 4.118386ms: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.826296   12914 retry.go:31] will retry after 11.411511ms: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.838531   12914 retry.go:31] will retry after 14.243424ms: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.853828   12914 retry.go:31] will retry after 18.027639ms: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.872115   12914 retry.go:31] will retry after 23.034512ms: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
I1102 13:27:14.895356   12914 retry.go:31] will retry after 44.249998ms: open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/scheduled-stop-744353/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-744353 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-744353 -n scheduled-stop-744353
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-744353
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-744353 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1102 13:27:57.763689   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-744353
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-744353: exit status 7 (77.544256ms)

-- stdout --
	scheduled-stop-744353
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-744353 -n scheduled-stop-744353
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-744353 -n scheduled-stop-744353: exit status 7 (78.831449ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-744353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-744353
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-744353: (4.755588484s)
--- PASS: TestScheduledStopUnix (96.32s)
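
The scheduled-stop flow above: schedule a stop, confirm TimeToStop is set, reschedule, cancel, then schedule a short timeout and wait for the host to reach Stopped (status exits 7 once it does). By hand, with a placeholder profile:

	minikube stop -p <profile> --schedule 5m
	minikube status --format='{{.TimeToStop}}' -p <profile>
	minikube stop -p <profile> --cancel-scheduled
	minikube stop -p <profile> --schedule 15s   # host is Stopped ~15s later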

TestInsufficientStorage (9.59s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-449768 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-449768 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.102138862s)

-- stdout --
	{"specversion":"1.0","id":"eab42a10-fa9a-432b-95e9-f72f1fd8c055","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-449768] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ba0bbd0-c0c9-4b34-b931-7829e724cd9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21808"}}
	{"specversion":"1.0","id":"b251431c-90a1-410d-a2a0-f6af14b44e05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"307e6c97-eb73-497a-bc4e-fee7279d0a5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig"}}
	{"specversion":"1.0","id":"a102098b-f9af-452c-83cc-206104023235","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube"}}
	{"specversion":"1.0","id":"ea1708c2-18ea-4d2a-a844-bdd5690d7ba2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"70ca573e-f687-46a8-b1c2-6cdd4f866b0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a40f2f40-4138-4bd8-bffc-c476cac2c276","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"93e2a98b-c291-4771-a564-894e39847826","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9c90ec0d-3a6a-4a37-93c1-dd9618aaa1be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"027abcf5-377b-4c67-90d0-89fddc268b4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9bb15e37-97cc-4935-8062-e1ea5fcbcfad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-449768\" primary control-plane node in \"insufficient-storage-449768\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5335327-66a4-423b-a262-005dcc928607","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0af438bf-dc49-4b63-b567-aacc6cd8b030","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"69a64559-bda2-474e-8fc5-2ad0fbb3e9f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-449768 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-449768 --output=json --layout=cluster: exit status 7 (285.404234ms)

-- stdout --
	{"Name":"insufficient-storage-449768","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-449768","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1102 13:28:37.991172  183208 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-449768" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-449768 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-449768 --output=json --layout=cluster: exit status 7 (282.042401ms)

-- stdout --
	{"Name":"insufficient-storage-449768","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-449768","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1102 13:28:38.273519  183320 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-449768" does not appear in /home/jenkins/minikube-integration/21808-9416/kubeconfig
	E1102 13:28:38.283461  183320 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/insufficient-storage-449768/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-449768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-449768
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-449768: (1.920032652s)
--- PASS: TestInsufficientStorage (9.59s)
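
Judging from the settings echoed in the JSON events above, the test simulates a full /var via the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE environment variables, and start then aborts with exit code 26 (RSRC_DOCKER_STORAGE) unless --force is passed; presumably reproducible as:

	# env vars as observed in the log; treat them as test-only knobs
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p storage-demo --memory=3072 --output=json --driver=docker --container-runtime=crio
	echo $?   # expect 26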

TestRunningBinaryUpgrade (49.02s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3246581122 start -p running-upgrade-637676 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3246581122 start -p running-upgrade-637676 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.701402439s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-637676 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-637676 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.314341714s)
helpers_test.go:175: Cleaning up "running-upgrade-637676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-637676
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-637676: (2.436845402s)
--- PASS: TestRunningBinaryUpgrade (49.02s)
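Note: the whole "upgrade while running" contract is visible above: start the profile with the previous release, then re-run start on the same profile with the binary under test. Condensed (sketch; the /tmp binary is the cached v1.32.0 release the test fetched):

    /tmp/minikube-v1.32.0.3246581122 start -p running-upgrade-637676 --memory=3072 --vm-driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-637676 --memory=3072 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p running-upgrade-637676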

TestKubernetesUpgrade (309.22s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.78448487s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-273161
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-273161: (1.947908348s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-273161 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-273161 status --format={{.Host}}: exit status 7 (113.425259ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.37953125s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-273161 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (80.024808ms)
-- stdout --
	* [kubernetes-upgrade-273161] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-273161
	    minikube start -p kubernetes-upgrade-273161 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2731612 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-273161 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-273161 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.073540864s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-273161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-273161
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-273161: (3.774648855s)
--- PASS: TestKubernetesUpgrade (309.22s)
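Note: condensed, the transitions this test verifies are stop, upgrade, refused downgrade, and a restart at the new version. As standalone commands (sketch):

    minikube start -p kubernetes-upgrade-273161 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-273161
    minikube start -p kubernetes-upgrade-273161 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio   # upgrade: allowed
    minikube start -p kubernetes-upgrade-273161 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # downgrade: exit 106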

TestMissingContainerUpgrade (85.77s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3528477507 start -p missing-upgrade-777335 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3528477507 start -p missing-upgrade-777335 --memory=3072 --driver=docker  --container-runtime=crio: (34.518936821s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-777335
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-777335: (10.446754889s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-777335
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-777335 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-777335 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.366925973s)
helpers_test.go:175: Cleaning up "missing-upgrade-777335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-777335
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-777335: (4.79521964s)
--- PASS: TestMissingContainerUpgrade (85.77s)
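Note: this test removes the profile's backing container out from under minikube, then expects the newer binary to notice and recreate it. The simulated failure, as standalone commands (sketch):

    docker stop missing-upgrade-777335 && docker rm missing-upgrade-777335
    out/minikube-linux-amd64 start -p missing-upgrade-777335 --memory=3072 --driver=docker --container-runtime=crio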

TestPause/serial/Start (49.61s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-058363 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-058363 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.612055911s)
--- PASS: TestPause/serial/Start (49.61s)

TestPause/serial/SecondStartNoReconfiguration (6.65s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-058363 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-058363 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.633821092s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.65s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-784609 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-784609 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (93.299291ms)
-- stdout --
	* [NoKubernetes-784609] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
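Note: exit status 14 (MK_USAGE) is the expected result; --no-kubernetes and --kubernetes-version are mutually exclusive. A valid invocation simply drops the version pin (sketch):

    out/minikube-linux-amd64 start -p NoKubernetes-784609 --no-kubernetes --driver=docker --container-runtime=crio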

TestNoKubernetes/serial/StartWithK8s (26.35s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-784609 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-784609 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.977582563s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-784609 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.35s)

TestNetworkPlugins/group/false (3.69s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-123357 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-123357 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (173.359384ms)
-- stdout --
	* [false-123357] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1102 13:29:49.878820  205512 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:29:49.879111  205512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:29:49.879122  205512 out.go:374] Setting ErrFile to fd 2...
	I1102 13:29:49.879129  205512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:29:49.879351  205512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-9416/.minikube/bin
	I1102 13:29:49.879839  205512 out.go:368] Setting JSON to false
	I1102 13:29:49.880923  205512 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4342,"bootTime":1762085848,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1102 13:29:49.881021  205512 start.go:143] virtualization: kvm guest
	I1102 13:29:49.882753  205512 out.go:179] * [false-123357] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1102 13:29:49.883928  205512 notify.go:221] Checking for updates...
	I1102 13:29:49.883946  205512 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:29:49.885165  205512 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:29:49.886248  205512 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-9416/kubeconfig
	I1102 13:29:49.887398  205512 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-9416/.minikube
	I1102 13:29:49.888686  205512 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1102 13:29:49.889856  205512 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:29:49.891661  205512 config.go:182] Loaded profile config "NoKubernetes-784609": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:49.891811  205512 config.go:182] Loaded profile config "cert-expiration-110310": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:49.891956  205512 config.go:182] Loaded profile config "cert-options-514605": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:29:49.892096  205512 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:29:49.917948  205512 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1102 13:29:49.918083  205512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:29:49.979373  205512 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-02 13:29:49.968538871 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1102 13:29:49.979479  205512 docker.go:319] overlay module found
	I1102 13:29:49.981014  205512 out.go:179] * Using the docker driver based on user configuration
	I1102 13:29:49.982055  205512 start.go:309] selected driver: docker
	I1102 13:29:49.982066  205512 start.go:930] validating driver "docker" against <nil>
	I1102 13:29:49.982079  205512 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:29:49.983677  205512 out.go:203] 
	W1102 13:29:49.984817  205512 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1102 13:29:49.985925  205512 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-123357 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-123357

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-123357

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-123357

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-123357

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-123357

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-123357

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-123357

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-123357

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-123357

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-123357

>>> host: /etc/nsswitch.conf:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: /etc/hosts:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: /etc/resolv.conf:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-123357

>>> host: crictl pods:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: crictl containers:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> k8s: describe netcat deployment:
error: context "false-123357" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-123357" does not exist

>>> k8s: netcat logs:
error: context "false-123357" does not exist

>>> k8s: describe coredns deployment:
error: context "false-123357" does not exist

>>> k8s: describe coredns pods:
error: context "false-123357" does not exist

>>> k8s: coredns logs:
error: context "false-123357" does not exist

>>> k8s: describe api server pod(s):
error: context "false-123357" does not exist

>>> k8s: api server logs:
error: context "false-123357" does not exist

>>> host: /etc/cni:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: ip a s:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: ip r s:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: iptables-save:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: iptables table nat:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> k8s: describe kube-proxy daemon set:
error: context "false-123357" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-123357" does not exist

>>> k8s: kube-proxy logs:
error: context "false-123357" does not exist

>>> host: kubelet daemon status:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: kubelet daemon config:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> k8s: kubelet logs:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 13:29:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-110310
contexts:
- context:
    cluster: cert-expiration-110310
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 13:29:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-110310
  name: cert-expiration-110310
current-context: ""
kind: Config
users:
- name: cert-expiration-110310
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/cert-expiration-110310/client.crt
    client-key: /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/cert-expiration-110310/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-123357

>>> host: docker daemon status:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: docker daemon config:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: /etc/docker/daemon.json:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: docker system info:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: cri-docker daemon status:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: cri-docker daemon config:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: cri-dockerd version:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: containerd daemon status:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: containerd daemon config:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: /etc/containerd/config.toml:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: containerd config dump:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: crio daemon status:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: crio daemon config:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: /etc/crio:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"

>>> host: crio config:
* Profile "false-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-123357"
----------------------- debugLogs end: false-123357 [took: 3.333494454s] --------------------------------
helpers_test.go:175: Cleaning up "false-123357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-123357
--- PASS: TestNetworkPlugins/group/false (3.69s)
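Note: this test passes because the start is supposed to be refused: CRI-O has no built-in pod network, so minikube rejects --cni=false with MK_USAGE. A start that would proceed names a concrete CNI instead (sketch; the plugin choice here is illustrative):

    out/minikube-linux-amd64 start -p false-123357 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio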

TestStoppedBinaryUpgrade/Setup (0.58s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

TestStoppedBinaryUpgrade/Upgrade (64.43s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3237968385 start -p stopped-upgrade-043602 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3237968385 start -p stopped-upgrade-043602 --memory=3072 --vm-driver=docker  --container-runtime=crio: (44.73603533s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3237968385 -p stopped-upgrade-043602 stop
E1102 13:30:45.148637   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/functional-529076/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3237968385 -p stopped-upgrade-043602 stop: (2.350876656s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-043602 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-043602 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.34396475s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (64.43s)

TestNoKubernetes/serial/StartWithStopK8s (27.55s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-784609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-784609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.450624703s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-784609 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-784609 status -o json: exit status 2 (373.313162ms)
-- stdout --
	{"Name":"NoKubernetes-784609","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-784609
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-784609: (4.723157546s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.55s)

TestNoKubernetes/serial/Start (7.73s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-784609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-784609 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.732043805s)
--- PASS: TestNoKubernetes/serial/Start (7.73s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-784609 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-784609 "sudo systemctl is-active --quiet service kubelet": exit status 1 (308.395876ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
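Note: the non-zero exit is the assertion. systemctl is-active exits 0 only for an active unit; status 3 means inactive, confirming kubelet really is not running inside the node. The same check by hand (sketch):

    out/minikube-linux-amd64 ssh -p NoKubernetes-784609 "sudo systemctl is-active kubelet"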

TestNoKubernetes/serial/ProfileList (10.34s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (4.493521454s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (5.850718259s)
--- PASS: TestNoKubernetes/serial/ProfileList (10.34s)

TestNoKubernetes/serial/Stop (1.36s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-784609
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-784609: (1.362009003s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

TestNoKubernetes/serial/StartNoArgs (6.53s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-784609 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-784609 --driver=docker  --container-runtime=crio: (6.529333017s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.53s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-784609 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-784609 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.187767ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-043602
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestNetworkPlugins/group/auto/Start (40.56s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (40.556258156s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.56s)

TestNetworkPlugins/group/kindnet/Start (39.31s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (39.311393943s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (39.31s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-123357 "pgrep -a kubelet"
I1102 13:32:18.227320   12914 config.go:182] Loaded profile config "auto-123357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.22s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-123357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5r8zb" [35c8e358-7856-4d24-b4ab-61562f996d7e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5r8zb" [35c8e358-7856-4d24-b4ab-61562f996d7e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004073129s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)

TestNetworkPlugins/group/calico/Start (55.17s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (55.171362034s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.17s)

TestNetworkPlugins/group/auto/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-123357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

TestNetworkPlugins/group/auto/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

TestNetworkPlugins/group/auto/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
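Note: DNS, Localhost and HairPin probe three paths from the netcat pod: cluster DNS, its own loopback, and its own service name (hairpin). Runnable by hand against this cluster (sketch):

    kubectl --context auto-123357 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"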

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-jrgqm" [0aee6955-3ae3-474c-9bce-53771bfd952e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00356944s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-123357 "pgrep -a kubelet"
I1102 13:32:40.378062   12914 config.go:182] Loaded profile config "kindnet-123357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.2s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-123357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hhwqb" [87e1c5dc-0172-4834-b0bd-9f683035e9bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hhwqb" [87e1c5dc-0172-4834-b0bd-9f683035e9bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004163235s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.20s)

TestNetworkPlugins/group/custom-flannel/Start (47.04s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (47.043875521s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.04s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-123357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (38.49s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (38.484995798s)
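Note: --enable-default-cni=true is the older spelling for minikube's built-in bridge CNI; on current minikube it is treated as equivalent to the explicit form (sketch with a demo profile name):

    $ minikube start -p bridge-demo --cni=bridge --driver=docker --container-runtime=crio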
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.49s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-79n2g" [94012b46-6316-41fc-8bbc-959adec7e8cb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003186558s
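Note: the ControllerPod gate only confirms that the plugin's own daemon pod is Running before any traffic tests execute. A hand-rolled equivalent of the label-selector poll above:

    $ kubectl --context calico-123357 -n kube-system wait \
        --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m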
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-123357 "pgrep -a kubelet"
I1102 13:33:26.336596   12914 config.go:182] Loaded profile config "calico-123357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (8.2s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-123357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5gsbl" [3a185a6c-d07d-41ea-8cd3-d58440611c67] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5gsbl" [3a185a6c-d07d-41ea-8cd3-d58440611c67] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.003718364s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.20s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-123357 "pgrep -a kubelet"
I1102 13:33:34.519033   12914 config.go:182] Loaded profile config "custom-flannel-123357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-123357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pd8rz" [4d7c2e7f-70e3-43c6-bbdd-59a9b94e700d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pd8rz" [4d7c2e7f-70e3-43c6-bbdd-59a9b94e700d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003568675s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/calico/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-123357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-123357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-123357 "pgrep -a kubelet"
I1102 13:33:49.137873   12914 config.go:182] Loaded profile config "enable-default-cni-123357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-123357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kdg8k" [2b2095a3-300c-4b98-bbeb-2908eb897a11] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kdg8k" [2b2095a3-300c-4b98-bbeb-2908eb897a11] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003880126s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

TestNetworkPlugins/group/flannel/Start (51.85s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.85470931s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.85s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-123357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (70.18s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-123357 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m10.183883057s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (49.57s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-054159 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-054159 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.56746595s)
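Note: --kvm-network and --kvm-qemu-uri are kvm2-driver options carried along in this group's shared flag set; with --driver=docker they should have no effect. The flags that actually shape this scenario are --kubernetes-version=v1.28.0 (a deliberately old release) and --disable-driver-mounts, so the essential start reduces to (sketch with a demo profile name):

    $ minikube start -p old-k8s-demo --driver=docker --container-runtime=crio \
        --kubernetes-version=v1.28.0 --disable-driver-mounts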
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.57s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-gvq9b" [d99f81b6-04a4-4e07-9e3d-cef8c4ee7b8f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004423753s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-123357 "pgrep -a kubelet"
I1102 13:34:53.530477   12914 config.go:182] Loaded profile config "flannel-123357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-123357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wtwls" [ecacb40e-d668-4710-9aa5-3d812a093176] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1102 13:34:54.687964   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-wtwls" [ecacb40e-d668-4710-9aa5-3d812a093176] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003939256s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

TestNetworkPlugins/group/flannel/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-123357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-054159 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b69c12c6-19df-47c3-8096-02b70e53bbd1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b69c12c6-19df-47c3-8096-02b70e53bbd1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003585642s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-054159 exec busybox -- /bin/sh -c "ulimit -n"
I1102 13:35:15.733056   12914 config.go:182] Loaded profile config "bridge-123357": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
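Note: once the busybox pod reports healthy, the test execs `ulimit -n` as a quick proof that `kubectl exec` works end to end; the command itself just prints the container's open-file-descriptor limit, a convenient one-liner for the purpose:

    $ kubectl --context old-k8s-version-054159 exec busybox -- /bin/sh -c "ulimit -n"
    # prints a single number; the exact value depends on the runtime defaults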
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-123357 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-123357 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-96xhh" [b11d7a76-eb55-43fb-a155-19c498af44d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-96xhh" [b11d7a76-eb55-43fb-a155-19c498af44d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00417358s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

TestStartStop/group/old-k8s-version/serial/Stop (16.99s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-054159 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-054159 --alsologtostderr -v=3: (16.986818309s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.99s)

TestStartStop/group/no-preload/serial/FirstStart (51.43s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.425658114s)
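Note: --preload=false makes minikube skip its preloaded images-and-state tarball, so every component image is pulled individually from its registry; that is what gives this group its no-preload name and its longer FirstStart. Reduced to essentials (demo profile name):

    $ minikube start -p no-preload-demo --preload=false \
        --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1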
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.43s)

TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-123357 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-123357 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E1102 13:37:44.319104   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054159 -n old-k8s-version-054159
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054159 -n old-k8s-version-054159: exit status 7 (101.665761ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-054159 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
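Note: --format={{.Host}} is a Go template over minikube's status output, and minikube encodes component state in the exit code, so status 7 from a fully stopped cluster is expected here; the test treats it as informational ("may be ok") rather than a failure. The same template mechanism exposes the other fields:

    $ minikube status -p old-k8s-version-054159 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'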
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (50.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-054159 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-054159 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.945116163s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054159 -n old-k8s-version-054159
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.32s)

TestStartStop/group/embed-certs/serial/FirstStart (47.53s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.533403484s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.53s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.395591507s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.40s)

TestStartStop/group/no-preload/serial/DeployApp (8.28s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-978795 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a73e312f-e302-474f-9e60-484d384e49da] Pending
helpers_test.go:352: "busybox" [a73e312f-e302-474f-9e60-484d384e49da] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a73e312f-e302-474f-9e60-484d384e49da] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003979459s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-978795 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.28s)

TestStartStop/group/no-preload/serial/Stop (16.29s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-978795 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-978795 --alsologtostderr -v=3: (16.289301725s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.29s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4njq9" [072adefb-0813-4d34-9eab-d29bfbadd004] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003949034s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4njq9" [072adefb-0813-4d34-9eab-d29bfbadd004] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003083003s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-054159 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/DeployApp (8.23s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-748183 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d262d7c9-f896-4859-90ca-aa663e85851a] Pending
helpers_test.go:352: "busybox" [d262d7c9-f896-4859-90ca-aa663e85851a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d262d7c9-f896-4859-90ca-aa663e85851a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003700081s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-748183 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-054159 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
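Note: `image list --format=json` dumps the node's image store as JSON, and the test then calls out anything that is not a stock Kubernetes image, which is why the busybox and kindnetd images pulled by earlier steps are reported as "non-minikube". A sketch for inspecting the same data by hand (assuming jq is installed and the JSON entries carry a repoTags field):

    $ out/minikube-linux-amd64 -p old-k8s-version-054159 image list --format=json | jq -r '.[].repoTags[]'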
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-978795 -n no-preload-978795
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-978795 -n no-preload-978795: exit status 7 (80.628483ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-978795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (51.95s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-978795 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.565757213s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-978795 -n no-preload-978795
E1102 13:37:34.064913   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:34.071325   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:34.082728   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.95s)

TestStartStop/group/embed-certs/serial/Stop (18.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-748183 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-748183 --alsologtostderr -v=3: (18.124588843s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.12s)

TestStartStop/group/newest-cni/serial/FirstStart (28.23s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (28.225710594s)
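Note: --extra-config follows minikube's component.key=value convention, so kubeadm.pod-network-cidr=10.42.0.0/16 is passed through to kubeadm at init; combined with --network-plugin=cni and no CNI actually installed, workloads cannot schedule, which is why this group skips its DeployApp/UserApp steps (see the WARNING lines further down). Essentials (demo profile name):

    $ minikube start -p newest-cni-demo --network-plugin=cni \
        --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
        --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1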
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.23s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-538419 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4c549d03-4904-4a3b-b321-820059b96c9e] Pending
helpers_test.go:352: "busybox" [4c549d03-4904-4a3b-b321-820059b96c9e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4c549d03-4904-4a3b-b321-820059b96c9e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.002631484s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-538419 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.75s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-538419 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-538419 --alsologtostderr -v=3: (18.753144116s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.75s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-748183 -n embed-certs-748183
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-748183 -n embed-certs-748183: exit status 7 (100.090881ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-748183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (48.67s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-748183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.337953109s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-748183 -n embed-certs-748183
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.67s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (2.76s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-066482 --alsologtostderr -v=3
E1102 13:37:18.431890   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:18.438261   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:18.449832   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:18.471209   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:18.512649   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:18.594126   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:18.756422   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:19.078629   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
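Note: the E1102 cert_rotation lines above (and interleaved throughout this part of the run) come from the harness's client-go trying to reload client certificates for profiles that have already been torn down (auto-123357, kindnet-123357); they are stale-kubeconfig noise, not failures of the tests they appear under.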
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-066482 --alsologtostderr -v=3: (2.763733086s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.76s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419: exit status 7 (95.692504ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-538419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.03s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1102 13:37:19.722281   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-538419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.687120267s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-538419 -n default-k8s-diff-port-538419
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.03s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-066482 -n newest-cni-066482
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-066482 -n newest-cni-066482: exit status 7 (103.90112ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-066482 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (12.18s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1102 13:37:21.004235   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:23.566080   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:28.687647   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-066482 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (11.850069412s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-066482 -n newest-cni-066482
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.18s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-066482 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1102 13:37:34.104162   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hnwjb" [8c6b1c1c-51c2-4e5c-a9d2-bc1abb547fdf] Running
E1102 13:37:34.146413   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:34.227870   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:34.389388   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:34.711498   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003534996s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hnwjb" [8c6b1c1c-51c2-4e5c-a9d2-bc1abb547fdf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003121025s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-978795 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-978795 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
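
Note: image list --format=json prints one JSON record per image, and the "non-minikube image" lines above are tags absent from minikube's expected-image list. To list the tags yourself (a sketch; assumes jq is installed and that each record exposes a repoTags field):

  out/minikube-linux-amd64 -p no-preload-978795 image list --format=json | jq -r '.[].repoTags[]'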

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t4hjh" [5163c067-aafb-41eb-bfce-05f4754d5cbc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003826662s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t4hjh" [5163c067-aafb-41eb-bfce-05f4754d5cbc] Running
E1102 13:37:59.411951   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/auto-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004343298s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-748183 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-748183 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zcdhn" [a8adaa2c-97af-4c5c-8dea-76b5d7fe8f9d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003217781s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zcdhn" [a8adaa2c-97af-4c5c-8dea-76b5d7fe8f9d] Running
E1102 13:38:15.043427   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/kindnet-123357/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003221185s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-538419 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-538419 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

Test skip (27/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.53s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-123357 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-123357

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-123357

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-123357

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-123357

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-123357

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-123357

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-123357

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-123357

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-123357

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-123357

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: /etc/hosts:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: /etc/resolv.conf:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-123357

>>> host: crictl pods:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: crictl containers:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> k8s: describe netcat deployment:
error: context "kubenet-123357" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-123357" does not exist

>>> k8s: netcat logs:
error: context "kubenet-123357" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-123357" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-123357" does not exist

>>> k8s: coredns logs:
error: context "kubenet-123357" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-123357" does not exist

>>> k8s: api server logs:
error: context "kubenet-123357" does not exist

>>> host: /etc/cni:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: ip a s:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: ip r s:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: iptables-save:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: iptables table nat:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-123357" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-123357" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-123357" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: kubelet daemon config:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> k8s: kubelet logs:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 13:29:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-110310
contexts:
- context:
    cluster: cert-expiration-110310
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 13:29:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-110310
  name: cert-expiration-110310
current-context: ""
kind: Config
users:
- name: cert-expiration-110310
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/cert-expiration-110310/client.crt
    client-key: /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/cert-expiration-110310/client.key
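
Note: this dumped kubeconfig explains the failures above: the only cluster and context it contains is the leftover cert-expiration-110310 entry, and current-context is empty, so every kubectl call pinned to kubenet-123357 fails with "context was not found". A quick manual confirmation (a sketch):

  kubectl config get-contexts
  kubectl config current-context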

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-123357

>>> host: docker daemon status:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: docker daemon config:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: docker system info:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: cri-docker daemon status:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: cri-docker daemon config:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: cri-dockerd version:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: containerd daemon status:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: containerd daemon config:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: containerd config dump:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: crio daemon status:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: crio daemon config:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: /etc/crio:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

>>> host: crio config:
* Profile "kubenet-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-123357"

----------------------- debugLogs end: kubenet-123357 [took: 3.362509001s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-123357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-123357
--- SKIP: TestNetworkPlugins/group/kubenet (3.53s)

TestNetworkPlugins/group/cilium (4.23s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1102 13:29:54.687127   12914 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/addons-341255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:636: 
----------------------- debugLogs start: cilium-123357 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-123357

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-123357

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-123357

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-123357

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-123357

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-123357

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-123357

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-123357

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-123357

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-123357

>>> host: /etc/nsswitch.conf:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: /etc/hosts:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: /etc/resolv.conf:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-123357

>>> host: crictl pods:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: crictl containers:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> k8s: describe netcat deployment:
error: context "cilium-123357" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-123357" does not exist

>>> k8s: netcat logs:
error: context "cilium-123357" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-123357" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-123357" does not exist

>>> k8s: coredns logs:
error: context "cilium-123357" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-123357" does not exist

>>> k8s: api server logs:
error: context "cilium-123357" does not exist

>>> host: /etc/cni:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: ip a s:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: ip r s:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: iptables-save:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: iptables table nat:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-123357

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-123357

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-123357" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-123357" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-123357

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-123357

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-123357" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-123357" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-123357" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-123357" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-123357" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: kubelet daemon config:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> k8s: kubelet logs:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-9416/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 13:29:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-110310
contexts:
- context:
    cluster: cert-expiration-110310
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 13:29:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-110310
  name: cert-expiration-110310
current-context: ""
kind: Config
users:
- name: cert-expiration-110310
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/cert-expiration-110310/client.crt
    client-key: /home/jenkins/minikube-integration/21808-9416/.minikube/profiles/cert-expiration-110310/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-123357

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: cri-dockerd version:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: containerd daemon status:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: containerd daemon config:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: containerd config dump:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: crio daemon status:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: crio daemon config:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: /etc/crio:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

>>> host: crio config:
* Profile "cilium-123357" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-123357"

----------------------- debugLogs end: cilium-123357 [took: 4.029881787s] --------------------------------
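All of the host-level sections above print the same not-found message: the cilium-123357 profile was already gone when debugLogs ran, so every probe that shells out to the minikube binary fails before reaching the node. A hypothetical sketch of such a collector loop (binary path and command layout are assumptions, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
)

// dumpHostFiles prints one ">>> host:" section per file, fetched over `minikube ssh`.
func dumpHostFiles(profile string, files []string) {
	for _, f := range files {
		fmt.Printf(">>> host: %s:\n", f)
		// Against a deleted profile this returns the "Profile ... not found"
		// hint instead of the file contents, as seen in every section above.
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "sudo cat "+f).CombinedOutput()
		fmt.Println(string(out))
	}
}

func main() {
	dumpHostFiles("cilium-123357", []string{
		"/etc/docker/daemon.json",
		"/etc/containerd/config.toml",
		"/etc/crio",
	})
}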
helpers_test.go:175: Cleaning up "cilium-123357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-123357
--- SKIP: TestNetworkPlugins/group/cilium (4.23s)

x
+
TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-560932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-560932
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
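The skip at start_stop_delete_test.go:101 is a driver gate, so on this docker/crio job it always fires. A minimal sketch of such a guard (the variable and its wiring are assumptions, not minikube's actual code):

package integration

import "testing"

// vmDriver stands in for the suite's --vm-driver selection (docker on this job).
var vmDriver = "docker"

func TestDisableDriverMounts(t *testing.T) {
	if vmDriver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// The start/stop assertions for disabled driver mounts would run here.
}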
